Factorization of large primes made easier? Claims that RSA-2048 is pathologically broken.


The factorization of products of very large prime numbers is a problem of central importance to information security. The product of two large primes underpins widely used public-key algorithms such as RSA (ECC, by contrast, relies on the discrete-logarithm problem). Currently, RSA-2048, which has a key size of 2048 bits — the size of the modulus, i.e. the product of the two primes — is considered secure. NIST recommends RSA-2048 only through the end of 2030, largely because quantum computers appear to be maturing and an implementation of Shor’s algorithm for factoring large moduli seems to be becoming feasible. Keep in mind that this requires enormous resources: logical qubits, ancilla and control qubits, error correction, large numbers of physical qubits, RF electronics, and so on. It is therefore a costly process, accessible only to wealthy governments or organizations. Breaking RSA has thus been an uneven playing field.
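To see concretely why factoring the modulus breaks RSA, here is a toy sketch with deliberately tiny, illustrative primes (this is standard textbook RSA, not the preprint's method):

```python
# Toy RSA: the public key is (n, e); the private exponent d can be
# recomputed by anyone who factors n. Primes here are tiny for readability.
p, q = 61, 53            # the secret primes (real RSA uses ~1024-bit primes each)
n = p * q                # public modulus, 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # Euler's totient, computable only if p and q are known
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

m = 42                   # a message
c = pow(m, e, n)         # encrypt with the public key
assert pow(c, d, n) == m # decrypt with the private key

# An attacker who factors n = p*q can recompute phi and hence d:
for cand in range(2, n):
    if n % cand == 0:
        p2, q2 = cand, n // cand
        break
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(c, d2, n) == m  # the recovered key decrypts the ciphertext
```

At 2048 bits the trial-division loop above is hopeless, which is exactly why the hardness of factoring is the security assumption.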

Very recently, Dr. Ed Gerck and his team at the Planalto Research institute claimed to level this field, using a quantum-computing technique — not Shor’s algorithm — to pathologically break RSA-2048. This would imply that further increasing the key size will not make RSA safer either. The authors do not stop there: they claim the whole process costs less than $1,000 and can be implemented on commodity classical hardware such as cellphones, with the concepts of quantum computing and quantum algorithms used only at the software level.

A careful reading of the preprint shared by the authors yields the following points:

  1. The authors reject the use of irrational, complex, and p-adic numbers, admitting only rational numbers (positive and negative) for numerical calculations. The imaginary unit i is treated as a 90-degree rotation, reducing complex numbers to rationals; in effect, the authors map all calculations onto R = C = Q, represented on the {0,1} states of the hardware. They claim this also speeds up calculations.
  2. Further, they use ‘multi-state logic’ in software to claim a faster-than-exponential speed-up, relying on factorization based on addition rather than multiplication or division. Parallel computation of the GCD is exploited, but in a “non-quantum, quantum” way. The multi-state logic is implemented on binary-logic hardware through an optimized mapping GF(3^m) —> GF(2^n); alternatively, a special-purpose FPGA could be used.
  3. They use a partition function to factor by means of addition, where any partition containing a 1 is ignored (trivial, since 1 is the multiplicative identity), and the approach is to find the two largest factors whose product is N.
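The preprint does not spell out the addition-based procedure in runnable form. As a rough classical analogue (my own illustration, not the authors’ method), Fermat’s factorization also targets the two factors of an odd N closest to √N, and its inner loop uses only additions, since consecutive squares differ by successive odd numbers:

```python
import math

def fermat_factor(n):
    """Classical Fermat factorization of an odd composite n: search for
    a, b with n = a^2 - b^2 = (a-b)(a+b). The inner loop advances the
    candidate b^2 by addition alone."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    b2 = a * a - n           # candidate value of b^2
    step = 2 * a + 1         # (a+1)^2 - a^2, the next odd increment
    while math.isqrt(b2) ** 2 != b2:
        b2 += step           # move from a^2 - n to (a+1)^2 - n by addition
        step += 2
        a += 1
    b = math.isqrt(b2)
    return a - b, a + b      # the two factors closest to sqrt(n)

print(fermat_factor(3233))   # (53, 61)
```

Note that Fermat’s method is still exponential in the key size in the worst case; nothing here supports the preprint’s speed-up claim — it only shows what “factoring by addition” can look like classically.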

Here, the base argument is that physical systems perform addition much faster than the other arithmetic operations. Also, three-state (or multi-state) logic is a classical approach that can be employed in software while the hardware keeps operating on Boolean logic.

  4. Supposedly, a quantum adiabatic algorithm is used for the modulus calculations, with cost that scales polynomially with problem size. Only in this step is a quantum resource used.
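As a minimal sketch of what “multi-state logic on Boolean hardware” could mean (my own illustration — the paper’s GF(3^m) —> GF(2^n) mapping is not specified), here is three-state arithmetic emulated in software on ordinary binary integers, with each ternary digit (trit) stored in its own machine word:

```python
def to_trits(n, width=8):
    """Encode a non-negative integer as base-3 digits, least significant first."""
    trits = []
    for _ in range(width):
        trits.append(n % 3)
        n //= 3
    return trits

def add_trits(a, b):
    """Base-3 ripple-carry addition on trit lists: multi-valued logic
    emulated with ordinary binary integer operations."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s % 3)
        carry = s // 3
    return out

def from_trits(trits):
    return sum(d * 3**i for i, d in enumerate(trits))

print(from_trits(add_trits(to_trits(25), to_trits(17))))  # 42
```

This demonstrates only that ternary logic is trivially simulable on binary hardware; simulation by itself gives no speed-up, which is one reason the paper’s speed-up claim needs independent justification.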

In a previous paper, the authors proposed that exact solutions are possible using difference equations based on the Schrödinger equation. Thus, they argue, the Bohr “all-at-once” approach can yield precise solutions in step 4 above — not estimates — thereby rejecting the Heisenberg interpretation of quantum mechanics in favor of exact solutions. The authors argue in other papers as well that quantum mechanics can give exact solutions.

Personal statement: The multi-state logic used here is claimed to give a more-than-exponential speed-up in the classical domain, while the still-classical hardware can be optimized by the projection mapping GF(3) —> GF(2). In addition, quantum parallelism is used for the modulus calculation, somewhat as in Shor’s algorithm. The fundamental mathematics involves only the addition operation, based on Madhava’s partition function for computing factors.
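For context on the addition-only theme: a GCD can indeed be computed with subtraction alone. Euclid’s original subtraction form (a standard classical fact, not the paper’s parallel variant, which the preprint does not specify) is:

```python
def gcd_by_subtraction(a, b):
    """Euclid's original GCD: repeated subtraction only, no division
    or multiplication. Assumes a, b are positive integers."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_by_subtraction(3233, 265))  # 53, a factor of the toy modulus 3233
```

Subtraction-only GCD is far slower than the division form (linear rather than logarithmic in the inputs), so “addition/subtraction only” is not automatically a speed advantage.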

The paper is major hype, and the preprint has added to the confusion through the claims discussed in this article.

Open questions:

  1. How does mathematically rejecting the Heisenberg interpretation lead to better solutions in the implementation of QC algorithms? Does it also imply that physical reality behaves differently?

  2. How can multi-state logic in quantum mechanics be combined with rational numbers, the partition function, and all-at-once logic to gain an exponential speed-up in software alone, while the hardware is classical? Or are both — or neither — quantum mechanical? Here there is major confusion.

  3. How does Madhava’s calculus work, and how can it give faster prime-factorization results against RSA? Also, how does quantum mechanics play a part here?
