• 0 Posts
  • 2 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Basically, all encryption multiplies some big prime numbers to get the key

    No, not all encryption. First of all, there are two main categories of encryption:

    • asymmetrical
    • symmetrical

    The most widely used asymmetrical algorithms rely on the integer factorization problem or related problems (like the discrete logarithm) that are vulnerable to quantum computers, so those will break. Symmetrical encryption will not. I’m not saying all this to be a pedant; it’s actually significant for the safety of our current communications. Well-designed schemes like TLS and the Signal protocol use a combination of both types because they have complementary strengths and weaknesses. In very broad strokes:

    • asymmetrical encryption is used to initiate the communication because it can verify the identity of the other party
    • a key-agreement algorithm that is safe against eavesdropping (classically, Diffie-Hellman) is used to generate a key for symmetric encryption
    • the symmetric key is used to encrypt the payload, and it is thrown away after the communication is over (see the sketch after this list)
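
    To make that flow concrete, here’s a minimal sketch in Python with the `cryptography` package. It is not a real protocol: Ed25519 stands in for the identity/signature part, X25519 for the key agreement, AES-GCM for the symmetric payload encryption, and all names and parameters are illustrative. Note that X25519 on its own is not post-quantum, which is why real deployments are layering post-quantum key agreement on top of it.

    ```python
    # Toy version of the handshake pattern above (illustrative, not a real protocol):
    #  1. a long-term asymmetric key proves the server's identity (signature)
    #  2. an ephemeral key agreement produces a shared secret
    #  3. a symmetric key derived from that secret encrypts the actual payload
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # identity: the server signs its ephemeral public key with its long-term key
    server_identity = Ed25519PrivateKey.generate()
    server_eph = X25519PrivateKey.generate()
    eph_pub = server_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    signature = server_identity.sign(eph_pub)

    # the client checks the identity, then runs the key agreement
    client_eph = X25519PrivateKey.generate()
    server_identity.public_key().verify(signature, eph_pub)  # raises if forged
    shared_secret = client_eph.exchange(server_eph.public_key())

    # both sides compute the same secret; an eavesdropper who only saw the
    # public keys cannot (classically) reconstruct it
    assert shared_secret == server_eph.exchange(client_eph.public_key())

    # derive a short-lived symmetric session key and encrypt the payload with it
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"demo handshake").derive(shared_secret)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)
    ```

    When the session ends, `session_key` is thrown away; anyone who recorded the traffic would have to break the key agreement itself to get it back, which is the property the next paragraph relies on.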

    This is crucial because it means that even if someone is recording your encrypted traffic today to decrypt it in the future with a quantum computer, they are unlikely to succeed, provided the symmetric key is sufficiently strong and the key agreement itself holds up against a quantum attacker. They will see the initial handshake, including the messages used to negotiate the symmetric key, but they won’t be able to derive the key because, as we said, that step is safe against eavesdropping. (Classical Diffie-Hellman alone doesn’t give that guarantee against a quantum adversary, which is exactly why TLS and Signal have started layering post-quantum key agreement on top of it.)

    So a lot of today’s encrypted messages are safe. But in the future a quantum computer will be able to get the private key for the asymmetric encryption and perform a MitM attack or straight-up impersonate another entity. So we have to migrate to post-quantum algorithms before we get to that point.

    For storage, generally only symmetric algorithms are used as far as I know, so that’s already safe as is, assuming as always a strong algorithm and a sufficiently long key.
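
    As an illustration of what “sufficiently long” means here: Grover’s algorithm roughly halves the effective strength of a symmetric key, so a 256-bit key keeps about 128 bits of security against a quantum attacker. A minimal at-rest sketch with the same Python `cryptography` package (key handling and names are purely illustrative):

    ```python
    # Symmetric encryption at rest: one strong key, no asymmetric step involved.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 256-bit key to hedge against Grover
    nonce = os.urandom(12)                     # must never repeat for the same key
    blob = AESGCM(key).encrypt(nonce, b"file contents", None)
    assert AESGCM(key).decrypt(nonce, blob, None) == b"file contents"
    ```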


  • This is really funny to me. If you keep optimizing this process you’ll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

    On this topic, here’s another common anti-pattern that I’m waiting for people to recognize as insane and do something about:

    • person A needs to convey an idea/proposal
    • they write a short but complete technical specification for it
    • it doesn’t comply with some arbitrary standard/expectation, so they tell an AI to expand the text
    • the AI can’t add any real information; it just spreads the same information over more text
    • person B receives the text and is annoyed at how verbose it is
    • they tell an AI to summarize it
    • they get something that basically aims to reproduce the original text, except it’s been passed through an unreliable, hallucinating, energy-inefficient channel

    Based on true stories.

    The above is not to say that every AI use case is made up or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. This is a more general observation that people don’t question the sanity of interfaces enough, even when complying with them costs a lot of extra work.