My adventure in designing API keys

(vjay15.github.io)

89 points | by vjay15 3 days ago

22 comments

  • bob1029
    12 hours ago
    I don't understand the need for this level of engineering. It appears we are going for an opaque bearer token here. The checksum is pointless because an entire 512 bit token still fits in an x86 cache line. Comparing the whole sequence won't show up in any profiler session you will ever care about.

    If you want aspects of the token to be inspectable by intermediaries, then you want json web tokens or a similar technology. You do not want to conflate these ideas. JWTs would solve the stated database concern. All you need to store in a JWT scheme are the private/public keys. Explicit tracking of the session is not required.

    • notpushkin
      11 hours ago
      > The checksum is pointless because an entire 512 bit token still fits in an x86 cache line

      I suppose it's there to avoid a round trip to the DB. Most of us just need to host the DB on the same machine instead, but given that sharding is involved, I assume the product is big enough that this is undesirable.

      • phire
        11 hours ago
        You need to support revocation, so I'm not sure it's ever possible to avoid the need for a round trip to verify the token.
        • kukkamario
          11 hours ago
          The point of the checksum is just to drop obviously wrong keys. No need to handle revocation or do any DB access if the checksum is incorrect; the key can just be rejected.
          • ben-schaaf
            6 hours ago
            That sounds like it's only helpful for DDoS mitigation, in which case the attacker could trivially synthesize a correct checksum.
            • phire
              5 hours ago
              You don't have to use a publicly documented checksum.

              If you use a cryptographically secure hashing algorithm, mix in a secret salt and use a long enough checksum, attackers would find it nearly impossible to synthesise a correct checksum.
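A minimal sketch of the keyed checksum phire describes; the secret, hash, and checksum length here are all illustrative, and a real deployment would pull the secret from a KMS/HSM rather than hard-code it:

```python
import hashlib
import hmac

# Hypothetical server-side secret; illustrative only.
CHECKSUM_SECRET = b"demo-only-secret"

def make_checksum(payload: str, length: int = 8) -> str:
    """Truncated keyed hash over the key's random payload."""
    return hmac.new(CHECKSUM_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:length]

def quick_check(payload: str, checksum: str) -> bool:
    """Cheap pre-check: True means the key is worth a DB lookup."""
    return hmac.compare_digest(make_checksum(payload, len(checksum)), checksum)
```

Without the secret, an attacker cannot compute a checksum that passes `quick_check`, so forged keys are dropped before any database work.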

              • ben-schaaf
                5 hours ago
                I don't follow. The checksum is in "plain text" in every key. It's trivial to find the length of the checksum, and the checksum is generated from the payload.

                Others have pointed out that the checksum is for offline secret scanning, which makes a lot more sense to me than DDoS mitigation.

      • rrr_oh_man
        11 hours ago
        > I assume the product is big enough

        Experience tells otherwise

      • locknitpicker
        8 hours ago
        > I suppose it’s there to avoid round-trip to the DB.

        That assumption is false. The article states that the DB is hit either way.

        From the article:

        > The reason behind having a checksum is that it allows you to verify first whether this API key is even valid before hitting the DB,

        This is absurdly redundant. Caching DB calls is cheaper and simpler to implement.

        If this were a local validation check, where the API key's signature was verified against a secret to avoid a DB round trip, then I could see the value in it. But that's already well into the territory of an access token, which would be enough to reject the whole idea.

        If I saw a proposal like that in my org I would reject it on the grounds of being technically unsound.

    • Hendrikto
      6 hours ago
      JWTs solve some problems, but they come with a lot of their own. I do not think they should be the go-to solution.
    • vjay15
      10 hours ago
      Hello bob! The checksum is for offline secret scanning and also for rejecting API keys that might have a typo (niche case).

      I was just confused about the JWT approach, since from the research I did I saw that it's supposed to be a unique string and that's it!

      • petterroea
        10 hours ago
        I may be naive, but I can't imagine anyone typing an API key by hand. Optimizing for it sounds like premature optimization; surely stopping the less-than-one-in-a-million HTTP request with a hand-typed API key from reaching the DB isn't worth anything.
        • vjay15
          9 hours ago
          If not for typos, then I can use it for secret scanning :)
      • bob1029
        10 hours ago
        The neat thing about JWT is that there are no secrets to scan for. Your secret material ideally lives inside an HSM and never leaves. Scanning for these private keys is a waste of energy if they were generated inside the secure context.
        • agwa
          7 hours ago
          But JWTs are usually used as bearer tokens when doing API authentication. Those are definitely secrets that need to be scanned for.

          Or are you suggesting that the API requests are signed with a private key stored in an HSM, and the JWT certifies the public key? Is that common?

        • vjay15
          8 hours ago
          Ideally an API key shouldn't contain anything regarding the account or any other info, right? It's meant to be an opaque string, which is what I found in most of the other articles I read. Please do let me know if I am wrong about this assumption.
          • ijustlovemath
            7 hours ago
            JWTs operate on a different principle; the user's private key (API key) never leaves the user's device. Instead, the stated "role" and other JSON claims are signed with the server's private key, then verified with the corresponding public key, granting the permissions that role allows.
          • miningape
            7 hours ago
            Look at the JWT standard; tokens usually contain things like claims, roles, user IDs, etc.
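The signed-claims idea can be sketched with nothing but the standard library; this is an HS256-style token with an illustrative hard-coded secret (production code should use a vetted library such as PyJWT, and a managed key):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-only-secret"  # illustrative; never hard-code a real signing key

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signature = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_jwt(token: str):
    """Return the claims if the signature checks out, else None."""
    header, payload, signature = token.split(".")
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(expected, signature):
        return None
    # Restore the base64 padding stripped during encoding.
    return json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
```

The claims ride along in the token itself, which is exactly why a leaked JWT is still a secret worth scanning for, as agwa notes above.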
      • arethuza
        10 hours ago
        "for rejecting api keys which might have a type" - assuming that is meant to be "typo" - won't they get rejected anyway?
        • vjay15
          10 hours ago
          It's just an added benefit; I don't have to make a DB call to verify that :)
  • weitendorf
    11 hours ago
    Hey OP, sorry for the negativity; I think most of these commenters are pretty off-base. My company is building a lot of API infrastructure, and I thought this was a great write-up!
    • vjay15
      9 hours ago
      It is alright, I am learning a lot from them as well; healthy criticism is always useful :) I am very glad that you found this a great write-up ^_^
  • randomint64
    11 hours ago
    While it's true that API keys are basically prefix + base32Encode(ID + secret), you will want a few more things to make secure API keys: at least versioning, plus hashing the metadata to avoid confused-deputy attacks.

    Here is a detailed write-up on how to implement production API keys: https://kerkour.com/api-keys
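The prefix + base32Encode(ID + secret) shape described above, plus a scanner-friendly checksum, might look like this sketch (the slug, lengths, and checksum scheme are all illustrative, not taken from the linked write-up):

```python
import base64
import hashlib
import secrets
import uuid

PREFIX = "myapp"  # hypothetical slug; pick something scanners can grep for

def generate_api_key() -> str:
    key_id = uuid.uuid4().bytes       # lookup ID, safe to store in plaintext
    secret = secrets.token_bytes(20)  # the actual credential
    body = base64.b32encode(key_id + secret).decode().rstrip("=").lower()
    checksum = hashlib.sha256(body.encode()).hexdigest()[:6]  # aids offline scanning
    return f"{PREFIX}_{body}{checksum}"
```

The embedded ID lets the server find the stored hash without indexing on the secret itself.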

    • jeremyloy_wt
      9 hours ago
      I don't understand your explanation of mitigating the confused deputy. If the attacker has access to the database, can't they just read the IDs of the target row they are overriding first, so they can generate the correct hash?
      • randomint64
        8 hours ago
        The attack would go like this: the attacker has read/write access to the database but not to the code of the backend service. The attacker swaps the hash of a targeted API key with the hash of their own API key. The attacker now has access to the resources of the targeted organization when using their own API key.
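One way to get the mitigation described above is to bind the account ID into the hashed material, so a hash copied onto another organization's row no longer verifies; a hypothetical sketch:

```python
import hashlib
import hmac

def stored_hash(api_key: str, account_id: str) -> str:
    # Binding the account ID into the stored hash means a hash swapped
    # onto another account's row no longer verifies.
    return hashlib.sha256(f"{account_id}:{api_key}".encode()).hexdigest()

def verify(api_key: str, account_id: str, db_hash: str) -> bool:
    return hmac.compare_digest(stored_hash(api_key, account_id), db_hash)
```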
    • vjay15
      10 hours ago
      Thank you! I will definitely look into it!
  • Savageman
    11 hours ago
    Side note: the slug prefix is not primarily intended for the end-user / developer to figure out which kind of key it is, but for security scanners to detect when keys are committed to code / leaked, and to invalidate them.
    • vjay15
      10 hours ago
      Ahhhh I see, I didn't think about it that way; this could help us a lot, yeah!!!
  • tjarjoura
    7 hours ago
    I've always been interested in the technical distinction between an API "key" and an API "token". And the terminology of "key" used to confuse me, because I associated that with cryptography, and I thought an API key would be used to sign or encrypt something. But it seems that in many cases it's basically just a long, random password.
  • ramchip
    11 hours ago
    The purpose of the checksum is to help secret scanners avoid false positives, not to optimize the (extremely rare) case where an API key has a typo.
    • matja
      9 hours ago
      I suppose there could be two checksums, or two hashes: a public one specified so that API key scanners on the client side can detect leaks, and an internal hash with a secret nonce that is used to validate that the API key is potentially valid before needing to look it up in the database.

      That lets clients detect leaks, but malicious clients can't generate lots of valid-looking keys to spam your API endpoint and generate database load just from looking up API keys.
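The two-hash idea above could be sketched like this, with a documented CRC for scanners and an undocumented keyed hash for the server (the nonce and lengths are illustrative):

```python
import hashlib
import hmac
import zlib

INTERNAL_NONCE = b"demo-only-nonce"  # hypothetical; never published

def public_checksum(body: str) -> str:
    # Documented in the key spec: any offline scanner can compute it.
    return format(zlib.crc32(body.encode()), "08x")

def internal_checksum(body: str) -> str:
    # Keyed and undocumented: forged keys fail here before any DB lookup.
    return hmac.new(INTERNAL_NONCE, body.encode(), hashlib.sha256).hexdigest()[:8]
```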

    • vjay15
      10 hours ago
      Thank you so much ramchip :) I didn't know that!
  • calrain
    12 hours ago
    I don't like giving away any information whatsoever in an API key, and would lean towards a UUIDv7 string, just trying to avoid collisions.

    Even the random hex with checksum component seems overkill to me, either the API key is correct or it isn't.

    • andrus
      11 hours ago
      GitHub introduced checksums to their tokens to aid offline secret scanning. AFAIK it’s mostly an optimization for that use case. But the checksums also mean you can reveal a token’s prefix and suffix to show a partially redacted token, which has its benefits.
    • sneak
      9 hours ago
      Identifying an opaque value is useful for security analysis. You can use a regex to see when keys are committed to repos accidentally, for example.
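Such scanning is typically a regex over repo contents; a sketch assuming a hypothetical key shape of slug prefix, base32 body, and hex checksum:

```python
import re

# Hypothetical key shape: "myapp_" slug, 58-char base32 body, 6-hex checksum.
KEY_PATTERN = re.compile(r"\bmyapp_[a-z2-7]{58}[0-9a-f]{6}\b")

def find_leaked_keys(text: str) -> list:
    """Return every string in `text` that looks like one of our API keys."""
    return KEY_PATTERN.findall(text)
```

The fixed slug and fixed lengths are what keep the false-positive rate low compared with scanning for generic high-entropy strings.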
  • vjay15
    3 days ago
    Hello everyone, this is my third blog post; I am still a junior learning stuff ^_^
    • notpushkin
      11 hours ago
      Hey, welcome to HN!

      Reading “hex” pointing to a clearly base62-ish string was a bit interesting :-)

      Also, could we shard based on a short hash of account_id and store the same hash in the token? That way we can lose the whole api_key → account_id lookup table in the metashard altogether.
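The scheme suggested above might be sketched as follows, with an illustrative shard count; the tag travels inside the token, so routing needs no lookup table:

```python
import hashlib

NUM_SHARDS = 16  # illustrative shard count

def shard_tag(account_id: str) -> str:
    # Short, stable hash of the account ID, embedded in the token itself.
    return hashlib.sha256(account_id.encode()).hexdigest()[:4]

def shard_for(tag: str) -> int:
    # Route a request by the tag carried in its API key.
    return int(tag, 16) % NUM_SHARDS
```

The trade-off is that the tag leaks a few bits about the account, and resharding means re-issuing keys or keeping a tag-to-shard map.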

      • vjay15
        10 hours ago
        Hello, thanks for reading through my blog :D Coming to your question: yes, that is possible! I mentioned it in my second approach!

        But when I mentioned it to my senior, he wanted me to default to the random-string approach :)

    • vjay15
      10 hours ago
      I NEVER THOUGHT I WOULD BE ON THE FRONT PAGE OF HACKER NEWS, THANK YOU SO MUCH GUYS (╥﹏╥)
  • petterroea
    10 hours ago
    A bit over-engineered, but it was fun to read the observations on industry-standard API keys. I agree it would be nice to have more discussion around API keys and the qualities one would want from them.
  • pdhborges
    10 hours ago
    I don't even understand what approach 3 is doing. They ended up hashing the random part of the API key with a hash function that produces a small hash, and stored that in the metashard server - is that it?
    • vjay15
      10 hours ago
      Yeah... sorry, I am still not the best explainer, but that is the approach; I just wanted a shorter hash in the metashard, that is it. Approach 3 is an attempt by me to write my own base62/base70 encoder ;-;
  • tlonny
    10 hours ago
    Presumably because API keys are n bytes of random data vs. a shitty user-generated password, we don't have to bother using a salt and can use something cheap to compute like SHA-256 vs. a multi-round bcrypt-like?
    • agwa
      7 hours ago
      Correct.

      Even a million rounds of hashing only adds about 20 bits of security. No need if your secret is already 128 bits.
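The 20-bit figure is just the base-2 log of the round count, since each doubling of rounds adds one bit of attacker work:

```python
import math

# Each doubling of the hashing round count adds one bit of attacker work,
# so a million rounds buys about log2(1,000,000) ~ 19.93 extra bits.
extra_bits = math.log2(1_000_000)
```

Against a 128-bit random secret, those extra bits are negligible; generating a slightly longer secret is far cheaper than a million hash rounds per login.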

    • vjay15
      9 hours ago
      I can't understand what you are trying to say :o
      • numbsafari
        9 hours ago
        How are you storing the API key in your database?
        • vjay15
          9 hours ago
          A hash of the API key, just like passwords.
          • stanac
            5 hours ago
            I think they are saying that passwords are salted and hashed with multiple rounds to prevent rainbow tables and to slow down brute-forcing the password (in case of a DB leak). We don't need to do that for long randomized strings like API keys: no one is guessing a 32-character random string, so no salt is needed and we don't need multiple rounds of hashing.
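The point above as a sketch: a long random key can be stored as a single unsalted SHA-256 digest (a constant-time compare is still good hygiene):

```python
import hashlib
import hmac
import secrets

def new_api_key():
    key = secrets.token_urlsafe(32)  # ~256 bits of entropy, unguessable
    # A single unsalted SHA-256 is enough: rainbow tables and brute force
    # are hopeless against a long random string, unlike a human password.
    return key, hashlib.sha256(key.encode()).hexdigest()

def check_key(presented: str, stored_digest: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_digest)
```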
  • matja
    9 hours ago
    What if the "slug" were a prefix for the API key revocation URL, so the API key was actually a valid URL that revoked itself if fetched/clicked? :)
  • amelius
    9 hours ago
    It's a bit confusing that the "Random hex" example contains characters such as "q" and "p".
    • vjay15
      9 hours ago
      I don't understand your question :o
      • onei
        9 hours ago
        Hex is 0-9, a-f. P and q are outside that character set.
        • vjay15
          9 hours ago
          Yes, you are right, onei; it is supposed to be a random string instead of hex. I am sorry I made that mistake.
    • vjay15
      8 hours ago
      Fixed it in the blog; thanks for pointing it out, amelius ;-;
  • dhruv3006
    12 hours ago
    Hey - this was a great blog! I liked how you used the birthday paradox here.

    PS: I too am working on APIs. Take a look here: https://voiden.md/

  • hk__2
    8 hours ago
    > I didn't proceed with this approach since I don't want the API keys to have any info regarding the account, but hey it is all just a matter of preference and opinion.

    Well, I would have done that and saved half the blog post.

  • usernametaken29
    12 hours ago
    I know sometimes people just like to try things out, but for the love of god do not implement encryption-related functionality yourself. Use JWT tokens and OpenSSL or another established library to sign them. This problem is solved. Not essentially solved - solved. Creating your own API key system has a high likelihood of fucking things up for good!
    • fabian2k
      12 hours ago
      You don't need any encryption or signing for API keys. Using JWTs is probably more dangerous here, and more annoying for people using the API, since you now have to handle refreshing tokens.

      Plain old API keys are straightforward to implement. Create a long random string and save it in the DB. When someone connects to the API, check if the API key is in your DB and use that to authenticate them. That's it.
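The flow above, sketched with an in-memory dict standing in for the DB table (in production you would store a hash of the key rather than the key itself, as other commenters note):

```python
import secrets

API_KEYS = {}  # stand-in for the DB table: key -> account

def issue_key(account_id: str) -> str:
    key = secrets.token_urlsafe(32)  # long random string
    API_KEYS[key] = account_id       # in production, store a hash instead
    return key

def authenticate(key: str):
    """Return the account the key belongs to, or None if unknown."""
    return API_KEYS.get(key)
```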

      • swiftcoder
        10 hours ago
        > Plain old API keys are straightforward to implement

        This is pretty much just plain-old-api-keys, at least as far as the auth mechanism is concerned.

        The prefix slug and the checksum are just there so your vulnerability scanner can find and revoke all the keys folks accidentally commit to GitHub.

        • vjay15
          10 hours ago
          Yes, this is the approach!
      • iamflimflam1
        11 hours ago
        I would add the capability to seamlessly rotate keys.

        But otherwise, yes, for the love of everything holy - keep it simple.

      • sabageti
        10 hours ago
        We don't store it in plain text, right? Store them hashed, as always.
    • notpushkin
      12 hours ago
      The security here comes from looking the key up in the DB, not from any crypto shenanigans.
  • sneak
    9 hours ago
    This is a very good example of premature optimization.
  • grugdev42
    10 hours ago
    Everything about this is over-engineered. Just KISS.
  • codingjoe
    7 hours ago
    Is this running in a production environment yet? If so, do you have an email address where I can disclose a vulnerability?
    • vjay15
      7 hours ago
      No, this is just a POC; I haven't implemented any of it.