
Conversation

@rizlik (Contributor) commented Oct 28, 2025

Implement HMAC state caching with a dedicated key type and mirrored lifecycle handling.

  • Add WH_KEYTYPE_HMAC_STATE so cached HMAC contexts are isolated from normal crypto keys, permanently flagged as NONEXPORTABLE/EPHEMERAL, and prevented from being reused or committed to NVM.
  • Streamed HMAC requests now serialize the wolfCrypt Hmac struct into the keystore on each update, reload it for subsequent chunks, and evict it whenever a final/abort/error path runs, with the client clearing its stateId in lockstep to avoid dangling handles (see the lifecycle sketch below).
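
A minimal sketch of that lifecycle, assuming hypothetical hsm*HmacState() helpers for the keystore plumbing (they are not the real wolfHSM API):

```c
/* Sketch of the streaming lifecycle described above. whServerContext and
 * whKeyId are wolfHSM types; the hsm*HmacState() helpers are hypothetical
 * placeholders for this PR's keystore plumbing, not a real API. */
#include <stdint.h>
#include <wolfssl/wolfcrypt/hmac.h>
#include "wolfhsm/wh_server.h"

/* Hypothetical helpers: (de)serialize an Hmac struct to/from a keystore
 * slot of type WH_KEYTYPE_HMAC_STATE flagged NONEXPORTABLE | EPHEMERAL. */
int hsmCacheHmacState(whServerContext* s, whKeyId id, const Hmac* hmac);
int hsmLoadHmacState(whServerContext* s, whKeyId id, Hmac* hmac);
int hsmEvictHmacState(whServerContext* s, whKeyId id);

static int serverHmacUpdate(whServerContext* s, whKeyId stateId,
                            const uint8_t* in, word32 inLen)
{
    Hmac hmac;
    /* Reload the state serialized by the previous chunk. */
    int ret = hsmLoadHmacState(s, stateId, &hmac);
    if (ret == 0)
        ret = wc_HmacUpdate(&hmac, in, inLen);
    if (ret == 0)
        /* Re-serialize so the next chunk can resume where this left off. */
        ret = hsmCacheHmacState(s, stateId, &hmac);
    else
        /* Error path: evict so no key-dependent state lingers. */
        (void)hsmEvictHmacState(s, stateId);
    return ret;
}

static int serverHmacFinal(whServerContext* s, whKeyId stateId, uint8_t* mac)
{
    Hmac hmac;
    int ret = hsmLoadHmacState(s, stateId, &hmac);
    if (ret == 0)
        ret = wc_HmacFinal(&hmac, mac);
    /* Final/abort/error all converge on eviction; the client clears its
     * stateId in lockstep to avoid dangling handles. */
    (void)hsmEvictHmacState(s, stateId);
    return ret;
}
```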

NOTE: the PR stores the HMAC state inside the server. Ideally we want to use the new wrapping API and store the state securely on the client instead; that will be a follow-up improvement to this PR. My main open questions on that approach (see the sketch after this list) are:

  1. Where do we store the state on the client?
  2. How do we prevent replay of the state (if that attack is in scope)?
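
For the replay question, one possible direction is to bind a server-side monotonic counter into the wrapped blob; this is only a sketch of a hypothetical format, and every name in it is invented:

```c
#include <stdint.h>

/* Hypothetical wrapped-state blob the client would hold opaquely;
 * nothing here exists in wolfHSM today. */
typedef struct {
    uint32_t version;    /* format version of the wrapped payload       */
    uint32_t clientId;   /* binds the state to a single client/user     */
    uint64_t counter;    /* monotonic counter: the server refuses any
                          * blob whose counter is not the newest one it
                          * issued for this client, defeating replay    */
    uint8_t  iv[12];     /* AEAD nonce                                  */
    uint8_t  tag[16];    /* AEAD tag over the header and ciphertext     */
    uint8_t  wrapped[];  /* AEAD-wrapped serialized Hmac struct         */
} whWrappedHmacState;
```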

A separate type is used so that the client can't leak the state by either exporting it or using the stateId as a key in other cryptographic operations.
@rizlik rizlik marked this pull request as ready for review November 7, 2025 14:17
@rizlik rizlik requested review from bigbrett and billphipps November 7, 2025 14:17
@rizlik rizlik changed the title from "hmac: draft implementation" to "wolfHSM: add HMAC support" Nov 7, 2025
@rizlik rizlik changed the title from "wolfHSM: add HMAC support" to "HMAC support" Nov 7, 2025
@bigbrett (Contributor) left a comment

Hi @rizlik

Great work, but unfortunately the timing on this is a little bad - I see you followed the CMAC paradigm where the server holds the intermediate state; however, we were about to do a refactor to change this behavior for CMAC.

Ideally we can get rid of any server-held state.

In the case of HMAC, where the first block update could leak the key material, @billphipps and I had discussed buffering data locally on the client side until enough is present for this to not be an issue (rough sketch below).
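
Very roughly, the buffering idea would look something like this; BUFFER_THRESHOLD, the context layout, and wh_Client_HmacUpdate() are all invented names for illustration:

```c
#include <stdint.h>
#include <string.h>

#define BUFFER_THRESHOLD 64 /* e.g. one SHA-256 input block; illustrative */

typedef struct {
    uint8_t  buf[BUFFER_THRESHOLD];
    uint32_t bufLen;
} whClientHmacBuf;

/* Hypothetical transport call that ships a chunk to the server. */
int wh_Client_HmacUpdate(void* client, const uint8_t* in, uint32_t inLen);

int clientHmacUpdate(void* client, whClientHmacBuf* ctx,
                     const uint8_t* in, uint32_t inLen)
{
    int ret = 0;
    /* Hold data locally until the first server-side update would absorb
     * more than just key material. */
    if (ctx->bufLen + inLen < BUFFER_THRESHOLD) {
        memcpy(ctx->buf + ctx->bufLen, in, inLen);
        ctx->bufLen += inLen;
        return 0; /* nothing dispatched yet */
    }
    /* Flush the buffered bytes, then the new chunk. */
    if (ctx->bufLen > 0) {
        ret = wh_Client_HmacUpdate(client, ctx->buf, ctx->bufLen);
        ctx->bufLen = 0;
    }
    if (ret == 0)
        ret = wh_Client_HmacUpdate(client, in, inLen);
    return ret;
}
```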

In the case of CMAC, I'm not sure if this is an issue - we still need to dig a bit.

I'm on the fence about including this if we are about to rip it all out - how would you feel about taking a stab at the implementation where the client holds all the state (with the buffering caveat mentioned above)?

Worst case we can get this in now but if we are about to refactor then I'm more inclined to just get it "right" the first time, and have HMAC set a precedent that CMAC could follow.

Thoughts? @billphipps please chime in too

@rizlik (Contributor, Author) commented Nov 7, 2025

Hi @bigbrett,

Yeah, I agree both on the bad timing and on the huge limitation of having the state cached on the server, but I don't think the caveat can work, as I don't see a safe way to cache the state on the client.

In HMAC, just caching the partial hash states of H(K' ⊕ opad) and H(K' ⊕ ipad) is catastrophic, as it provides easy offline forgery: having these two partial states allows forging the HMAC of any message offline. If you have a more advanced partial state, like H(K' ⊕ opad) together with H((K' ⊕ ipad) ∥ m), you can still forge any message M' that has m as a fixed prefix.
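
For reference, with K' the padded key, the standard definition is

$$\mathrm{HMAC}(K, m) = H\big((K' \oplus opad) \,\|\, H\big((K' \oplus ipad) \,\|\, m\big)\big)$$

so the midstates reached right after absorbing (K' ⊕ opad) and (K' ⊕ ipad) let an attacker resume both hash computations on any message of their choosing and produce a valid tag without ever knowing K.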

It's also stated clearly in the FIPS document:

These stored intermediate values shall be treated and protected in the same manner as secret keys.

In CMAC, leaking the state is not as catastrophic, but it's still very bad practice: no intermediate key-derived material based on a key whose export is restricted should ever leave the security boundary (with the exception of the output of the cryptographic operation, of course).

The only solution I see right now is to wrap the sensitive state on the server, after thoughtful consideration of replay attacks across users.

We also need space to store the wrapped data alongside the HMAC and CMAC objects on the client side.

The reason this was rushed is that it might become critical for a delivery.

I'm OK with dropping this, but then we probably need to define a clear roadmap for solving the blockers of the wrapped-state solution.
