TUF - The Update Framework

Hello,

In our company we did some testing with Mender and so far we are quite satisfied. We are wondering how Mender, and especially the security of its updates, compares to the work done in TUF.
So how much does it deviate?
Is this on purpose because of a different goal?

With kind regards,

Gerrit Binnenmars

I am tagging @eystein who might have additional insights on the topic.

But from my understanding, TUF is a specification (with a reference implementation) of OTA update security at a theoretical level, with guidelines, and not a functional OTA system.

Comparing their guidelines to the Mender security model in high detail would take some effort and I would leave that to someone else :slight_smile:

But looking at it from a higher level, from TUF:

We can think of a software update system as “secure” if:

  • it knows about the latest available updates in a timely manner
  • any files it downloads are the correct files, and,
  • no harm results from checking or downloading files.

I believe Mender checks all of these boxes.

Hi @Gerrit!

We did evaluate building in TUF (there is a reference implementation) when Mender was started, but decided not to because it:

  • Does not offer significant increased security for most security requirements/threat models
  • Is very complex and would make Mender harder to use and maintain, in particular in terms of key management with different roles and purposes

“How secure” something is needs to be defined by a threat model before a formal analysis is possible. So the first question is: what are you trying to protect against?

In general Mender protects against the common scenarios out of the box:

  • Server spoofing: TLS
  • Client spoofing: JWT authentication based on a unique RSA keypair on the client
  • Any communication listening/meddling: TLS
  • Software tampering (end-to-end): Artifact signatures
  • Client compromise: Unique keys and Artifact download links, ability to reject & decommission
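On the Artifact signature point above: a Mender Artifact carries a manifest of SHA-256 checksums for its payloads, and the signature covers that manifest. Leaving out the signature step itself, the integrity check looks roughly like this sketch (data structures simplified, not Mender's actual code):

```python
import hashlib

def verify_payloads(manifest, payloads):
    """Check every payload against its expected checksum.

    manifest: {payload_name: expected_sha256_hex} (as extracted from
    the signed manifest); payloads: {payload_name: bytes}.
    """
    for name, expected in manifest.items():
        actual = hashlib.sha256(payloads[name]).hexdigest()
        if actual != expected:
            return False  # tampered or corrupted payload
    return True
```

Because the signature is verified over the manifest and the payloads are verified against the manifest, tampering anywhere between signing and installation is detected end-to-end.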

I think TUF could protect against some additional scenarios where the server and/or software repository gets compromised. For example, it could help avoid downgrade attacks (downgrading to vulnerable software that was installed in the past). However, this conflicts with the ability to roll back in Mender, which is an important robustness criterion.
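To make that tension concrete, here is a toy sketch (not Mender code) of a strict downgrade-protection policy. The same check that blocks a downgrade attack also blocks a deliberate rollback unless you explicitly punch a hole in it:

```python
def parse_version(v):
    """Parse a dotted version string like "2.1.0" into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def accept_update(installed, candidate, allow_rollback=False):
    """Strict downgrade protection: reject anything older than what is installed.

    Mender's robustness model needs the allow_rollback escape hatch,
    which in turn weakens the downgrade protection.
    """
    if allow_rollback:
        return True
    return parse_version(candidate) >= parse_version(installed)
```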

In terms of design principles we do want to make sure Mender is easy to use, and robust (e.g. supports rollback), and I think this is where TUF would conflict the most.

As you probably know this is a large topic, so would be happy to discuss further if you had any specific attack vectors or threat model in mind!

Hello @eystein

Thanks for your clear answer.
For us roll-back is essential, so an important point to take into account.

Still some more questions, although a bit out of scope of the original topic.

  1. Why do you state that TUF protects better against software repository compromise when software tampering (end-to-end) is solved with artifact signatures?
  2. Client compromise: would the usage of TPM chips on client motherboards improve the client security significantly in your opinion?
  3. The design is now based on a single persistent data partition. Currently we use a proprietary solution that, besides two root partitions, has two application software partitions and one common data partition. Would it be difficult to adapt Mender to support other partition schemes?
  4. Does Mender provide a solution for the initial installation on a blank disk?
  5. At some point, the bootloader arguments have to be adapted to switch to the alternative partition. How is this protected? In other words, does Mender ensure that the device always remains in a state where a new software update can be performed?

With kind regards,

Gerrit

Hi again @Gerrit !

Sure, happy to help! Keep in mind I am not an expert in TUF and you might want to consult with someone if you need professional guidance. But I will answer based on my understanding.

  1. It is true that Mender supports end-to-end signatures, so the software repository part is not that relevant. I know there is some support in TUF for signature expiry (and the complexity it entails for long-term offline IoT devices), which I think could provide some additional protection in some attack scenarios – but there is a tradeoff of course.

  2. Yes. This is not currently supported in the Mender client, but it is something we are considering (there are some other threads on this in this forum). It is always a good idea to protect secret keys as much as possible, especially when they are used for an important purpose. Note, however, that the only secret key a Mender client has is the unique private RSA key it uses to authenticate with the server. So if that gets stolen, a rogue client can pretend to be another client (and it can be rejected on the server once you notice it). The Artifact signing keys are not on any client; only the public verification keys are.

  3. I think we would need more information to assess this. In general it’s not a problem that you have multiple partitions. If you want to update them as well in some custom way this needs to be integrated. You can look at Update Modules and develop custom modules. Otherwise the Mender team offers consulting services to help with this.
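To give a feel for what an Update Module involves: Mender invokes the module executable with the state name as the first argument and the path to the unpacked Artifact file tree as the second. A minimal module that copies payload files into place could look like this sketch (the `/data/app` target directory is a hypothetical example, and real modules usually handle more states):

```python
#!/usr/bin/env python3
"""Minimal Update Module sketch; Mender calls it as: <module> <state> <files-tree>."""
import pathlib
import shutil
import sys

TARGET_DIR = "/data/app"  # hypothetical install location

def handle(state, files_tree=None):
    if state == "SupportsRollback":
        print("No")  # tell Mender this module cannot roll itself back
    elif state == "ArtifactInstall":
        # Copy every payload file shipped in the Artifact to the target.
        for f in pathlib.Path(files_tree, "files").iterdir():
            shutil.copy(f, TARGET_DIR)
    # All other states: nothing to do; exiting 0 means success.

if __name__ == "__main__":
    handle(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else None)
```

Because the module answers "No" to `SupportsRollback`, Mender knows not to expect rollback support from this payload type, while the rootfs A/B update keeps its normal rollback behavior.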

  4. No. There are several generic and vendor-dependent tools out there for this purpose. Also see the documentation on Provisioning: https://docs.mender.io/2.2/artifacts/provisioning-a-new-device

  5. I’m not quite sure I understand this question, but Mender implements a fully atomic update process. There is no “critical section”, so no matter when you lose power during the update process the client will bring the device back online. It will always roll back automatically if it is not able to connect to the Mender server after the update. You can read a bit more about board integration here: https://docs.mender.io/2.2/devices. Otherwise the Mender team offers consulting services to help with this as well.
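On the bootloader part of the question: with the U-Boot integration, the switch comes down to flipping environment variables (such as `upgrade_available` and `bootcount`), so committing or rejecting an update is a small atomic step rather than a critical section. A toy Python simulation of the boot-time decision (variable names inspired by the U-Boot integration, logic heavily simplified):

```python
def choose_boot_part(env):
    """Simulate the bootloader's A/B decision on each boot.

    env holds U-Boot-style variables: 'mender_boot_part' (partition to try),
    'old_part' (known-good partition), 'upgrade_available', 'bootcount'.
    This is an illustrative model, not the real bootloader script.
    """
    if env.get("upgrade_available") == 1:
        if env.get("bootcount", 0) > 0:
            # The new partition booted before but was never committed:
            # fall back to the known-good partition.
            env["upgrade_available"] = 0
            env["mender_boot_part"] = env["old_part"]
        else:
            env["bootcount"] = 1  # first attempt on the new partition
    return env["mender_boot_part"]

def commit(env):
    """What the client does after the update is confirmed working."""
    env["upgrade_available"] = 0
    env["bootcount"] = 0
```

The key property: if the device loses power or the new software never commits, the next boot lands on the old partition automatically, so the device always comes back to a state where it can take another update.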