Open-source silicon: How good is it for security, really?

The security community has spent many years debating whether open-source software is good for security[1]. Much of that discussion, and many of its conclusions, applies directly to open-source silicon. Undoubtedly, open-source silicon is a net positive for security. However, to fully realize its security benefits, it is important to understand the nuances of silicon development[2]. This post aims to explain some of these nuances to a reader who is already familiar with how open source helps software security. Specifically, we will cover silicon-specific nuances in these four areas:

  • Source code transparency: Do I really know if I am reviewing the source code for the right product? Does it matter?
  • Source code as a model of reality: Does silicon source code fully model silicon behavior? I promise, this is a legitimate question to ask in the silicon world.
  • Quick turnaround on fixes: If silicon is immutable, how do we fix silicon issues that we identify by source code analysis?
  • Unique threats: Are there unique silicon threats whose exploitation is made easier by making source code openly available?

Let’s begin.

Source code transparency

If the smoothie recipe were posted on GitHub, would that guarantee there are no slugs in the smoothie the waiter serves?

We play a silly game in our family. Whenever someone asks what’s for dinner, the rest of us respond with the most ridiculous and disgusting dishes we can think of: a slug smoothie, earthworm noodles, grilled cockroaches. You get the idea. Now, consider this: if a smoothie recipe were posted on GitHub, would that guarantee there are no slugs in the smoothie the waiter serves? Of course not! Just by looking at the smoothie, you cannot tell whether it was prepared using the open-source recipe or whether the chef was feeling adventurous that day. This highlights one way open-source silicon falls short of the transparency of open-source software.

When it comes to software, it’s easy to confirm that the final product uses the publicly available source code. Given a reproducible build, you can hash the actual binary from the product and compare it with the hash of the binary compiled from the source repository. When Google claims their Pixel phone uses a version of Android in which a particular CVE is fixed, you don’t have to take their word for it; you can verify it.
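
As a minimal sketch of that check in Python: assuming a reproducible build, hash the binary extracted from the device and the binary rebuilt from the public repository, and compare the digests. The file paths here are invented for illustration.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: a binary extracted from the shipping product, and a
# binary compiled locally from the published source repository.
shipped = sha256_of("extracted/update_image.bin")
rebuilt = sha256_of("build/update_image.bin")

print("match" if shipped == rebuilt else "MISMATCH")
```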

Silicon is different; you cannot hash the silicon and compare it to a git repository. When the manufacturer points to some RTL and says that’s what they have in their silicon, you have to take their word for it. This may or may not be a problem, depending on the trust and confidence you have in the manufacturer and their capabilities. Do they have an incentive to misrepresent? Are they competent enough to track changes and bug fixes in the products they ship?[3]

Source code as a model of reality

RTL source code does not provide the security reviewer with all the information needed to identify all silicon vulnerabilities.

The source code of software is a complete representation of how the program functions[4]. It provides everything that security reviewers need to identify all software vulnerabilities. However, the source code for silicon, called RTL (register-transfer level) source code, only captures the logical behavior of the hardware. It does not include information about other physical properties that can affect functionality and security, such as capacitive coupling and doping density, which may be leveraged by fault injection attacks.

Unlike software, the source code for silicon does not give the security reviewer all the information needed to identify all silicon vulnerabilities. The circuit-level implementation of the RTL source may introduce new vulnerabilities[5] or mitigate apparent vulnerabilities in the RTL source[6]. Open-source RTL is still very useful for identifying logical vulnerabilities in the RTL, as well as for reviewing (but not evaluating) any logical countermeasures implemented against physical/electrical attacks. However, it is very important to recognize this limitation. Otherwise, our experience with open-source software may lull us into a false sense of security when it comes to open-source silicon[7].
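
To illustrate, here is a hypothetical behavioral model, with Python standing in for RTL, of a common class of logical fault-injection countermeasure; all names and constants are invented. A source-level review can confirm the redundant check exists, but it cannot tell you whether a real voltage or laser glitch can defeat both comparisons at once.

```python
# Hypothetical behavioral model of a redundant "unlock" check, the kind of
# logical fault-injection countermeasure a reviewer can spot in RTL source.
UNLOCKED = 0xA5           # complementary constant pair: a single-bit fault
LOCKED = UNLOCKED ^ 0xFF  # is unlikely to turn LOCKED (0x5A) into UNLOCKED

def grant_debug_access(lifecycle_state: int) -> bool:
    # Two deliberately redundant comparisons; both must agree.
    check_a = lifecycle_state == UNLOCKED
    check_b = (lifecycle_state ^ 0xFF) == LOCKED
    return check_a and check_b

# RTL review can confirm this redundancy exists. Whether a physical glitch
# can corrupt both comparisons in the same cycle depends on placement,
# timing, and electrical behavior that the source does not capture.
print(grant_debug_access(0xA5), grant_debug_access(0x5A))  # True False
```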

Quick turnaround on fixes

The silver lining is the unreasonable effectiveness of software[8] workarounds in addressing silicon issues.

Open-source software allows vulnerabilities to be identified and fixed quickly, which benefits defenders. Open-source silicon likewise makes vulnerabilities easy to identify, but unlike software, silicon cannot be patched in the field. It may seem that this tilts the balance in favor of the attackers, but this is not the case. The silver lining is the unreasonable effectiveness of software[8] workarounds in addressing silicon issues[9].

Embedded software engineers are the unsung heroes of silicon development.

I’ll let you in on a secret from the world of silicon development: every silicon product that ships has unfixed bugs. The more modern the technology node, the more confounding and debilitating the bugs. The reason any silicon works at all is the industry’s ability to work around silicon bugs with software fixes. Embedded software engineers are the unsung heroes of silicon development. This effectiveness of software workarounds is not a happy coincidence: building flexibility into silicon so that bugs can be mitigated in software is a deliberate risk-mitigation strategy, and it has proven very effective.
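
As a sketch of what this looks like in practice, here is a hypothetical boot-time errata pass in Python; all register names, addresses, and bit assignments are invented, and a simulated register file stands in for the volatile MMIO accesses real firmware would use.

```python
# Hypothetical boot-time errata pass: firmware looks up the silicon revision
# and sets "chicken bits" that disable or reconfigure buggy hardware paths.
# All registers, addresses, and bit assignments are invented for illustration.

REGS = {0x4000_0000: 0xA0, 0x4000_0010: 0x0, 0x4000_0014: 0x0}  # simulated MMIO

def read_reg(addr: int) -> int:
    return REGS[addr]

def write_reg(addr: int, value: int) -> None:
    REGS[addr] = value

CHIP_REV = 0x4000_0000
PREFETCH_CTRL = 0x4000_0010
DMA_CTRL = 0x4000_0014

ERRATA = {
    # silicon revision -> list of (register, bits to set) workarounds
    0xA0: [(PREFETCH_CTRL, 1 << 3),  # disable a buggy speculative prefetch path
           (DMA_CTRL, 1 << 0)],      # force conservative DMA ordering
    0xB0: [(DMA_CTRL, 1 << 0)],      # later stepping fixed the prefetch bug
}

def apply_errata() -> None:
    rev = read_reg(CHIP_REV) & 0xFF
    for reg, bits in ERRATA.get(rev, []):
        write_reg(reg, read_reg(reg) | bits)

apply_errata()
```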

To realize the security benefits of the open-source model, silicon projects must lean into this strategy of software fixes for silicon issues. Treating open-source silicon projects as purely hardware projects is a recipe for failure. At a minimum, open-source silicon projects should:

  • Include low level software within the scope of the project
  • Support upgradable firmware
  • Prioritize flexibility in silicon to allow software workarounds

There are silicon products that do not support in-field software upgrades (e.g., smartcards in credit cards, many industrial IoT chips). For such products, it is reasonable to be concerned about open-source silicon tipping the balance in favor of attackers[10].

Unique silicon threats

The industry employs security-by-obscurity techniques as part of a defense strategy against such silicon modification attacks.

For software security, attacks that modify the underlying binary are often outside the threat model of the software layer under analysis. Generally, we rely on higher-privileged layers in the stack (software or hardware) to provide assurance that the binary does not change on the fly. However, for certain silicon products that require a high level of security assurance (e.g., EAL4+), attacks that involve modifying the silicon circuitry (e.g., focused ion beam (FIB) attacks), including attacks on the circuits that implement the countermeasures, are within the scope of the threat model. The fact that the layer in which we implement countermeasures (the silicon hardware layer) is itself subject to modification leaves very little room for effective technological countermeasures.

The industry employs security-by-obscurity techniques as part of a defense strategy against such silicon modification attacks. Examples include logic obfuscation, keyed obfuscation/encryption based on global keys embedded in the RTL, and GDS[11]-level obfuscation and camouflaging. Security-by-obscurity conflicts with the open-source development model. In the narrow context of such attacks, the open-source development model may indeed tip the balance in favor of attackers.
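
To give a flavor of one such technique, here is a toy model of keyed logic locking, with Python standing in for a gate-level netlist; the function, the key gates, and the key are all invented, and real schemes operate on full netlists with many key bits. The point is that the published source computes the intended function only under a secret key, which is exactly what sits uneasily with an open-source model.

```python
# Toy model of logic locking: XOR/XNOR "key gates" are spliced into a small
# combinational function so the netlist computes the intended logic only
# under the correct key. Function, gates, and key are invented for illustration.

CORRECT_KEY = (1, 0)  # the global secret the scheme depends on

def locked_circuit(a: int, b: int, c: int, k0: int, k1: int) -> int:
    # Intended function: out = (a AND b) OR (b AND c)
    n1 = ((a & b) ^ k0) ^ 1  # XNOR key gate: transparent only when k0 == 1
    n2 = (b & c) ^ k1        # XOR key gate: transparent only when k1 == 0
    return n1 | n2

def intended(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c)

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# With the correct key, the locked netlist matches the intended function...
assert all(locked_circuit(*i, *CORRECT_KEY) == intended(*i) for i in inputs)
# ...and every wrong key corrupts the output for at least one input.
wrong_keys = [(0, 0), (0, 1), (1, 1)]
assert all(any(locked_circuit(*i, *k) != intended(*i) for i in inputs)
           for k in wrong_keys)
```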

Having said that, this extremely sophisticated attack is an edge case that is not a relevant threat in most threat models[12]. In my opinion, most products that claim to resist FIB attacks do so for marketing purposes rather than because such attacks are viable within the parameters of their threat model. When such attacks do enter the discussion, this is a nuance to be carefully considered on a case-by-case basis before concluding whether the open-source development model is a security benefit for that product.

Key takeaways

The benefits of open-source software for security are well established. When it comes to open-source silicon, there are nuances that must be understood to fully realize its security benefits.

  • Source code transparency: We still have to trust the manufacturer to point us to the correct source code. We cannot independently verify a manufacturer's claims about which source code a product actually uses.
  • Source code as a model of reality: The source code only represents the logical behavior of the silicon, not the physical behavior, which is also important for security analysis.
  • Quick turnaround on fixes: The immutability of silicon does not necessarily preclude patching silicon security issues. In practice, most silicon security issues can be mitigated by software fixes.
  • Unique threats: There are silicon attacks, such as FIB attacks, that may be made easier by the attacker having knowledge of the source code.

  1. Here is a good overview by David Wheeler. ↩︎

  2. In this post, I will focus on open-source silicon development. But some of these aspects may apply to open-source hardware more broadly. ↩︎

  3. IMO, a frightening number of silicon manufacturers are not competent enough in source code change management. ↩︎

  4. I’m ignoring complexities with software bills of materials, compiler configurations, etc. Since open-source silicon has analogous issues, these are not particularly relevant for our discussion. ↩︎

  5. Many electrical or electromagnetic side-channel vulnerabilities are not apparent from the source. ↩︎

  6. Fault injection susceptibility can be greatly reduced by physical techniques such as careful placement of registers, shielding of traces, etc. ↩︎

  7. Abstraction is another good way to look at this topic. One could say: “The security argument for open-source software abstracts away the hardware behavior by assuming the hardware will behave per the ISA contract. Similarly, the security argument for open-source RTL abstracts away the underlying circuit behavior by assuming the circuit behaves a certain way.” The reason I did not take this approach is that, especially in bleeding-edge technology nodes, lower-level aspects of the contract between RTL and circuit behavior are not well defined and constantly evolve over many years. It is not uncommon to get the silicon back from the foundry and find that its behavior is wildly different from the pre-silicon models the foundry provided. The abstraction-layer approach is not quite meaningful when the contract across the layers is not precise. ↩︎

  8. I’m gonna use the terms “software” and “firmware” interchangeably. ↩︎

  9. While it is theoretically possible to have silicon bugs that cannot be worked around in software, in practice such bugs occur infrequently enough that they do not tip the balance in favor of attackers. In ~20 years of silicon development, I don’t recall a single instance in which a security bug in RTL had no software workaround, forcing us to re-spin the chip. ↩︎

  10. My sympathy for the situation these products find themselves in is conditional. Oftentimes, not having a software upgrade path is a business choice rather than a technological constraint. Sometimes it is a reasonable choice. For example, if the smart card in a credit card becomes vulnerable, it is cheaper for the bank to reissue the card than to develop and maintain the infrastructure for upgrading the smart card software. But often, the business choice is an artifact of misaligned incentives. ↩︎

  11. GDS is a file format used to convey IC layout information to the fab. The relationship between RTL source and GDS is analogous to the relationship between software source code and a compiled binary. ↩︎

  12. This is an extremely sophisticated attack. In bleeding-edge technology nodes, the cost of the attack and the expertise required are typically within the realm of nation states and industry-leading silicon technology corporations. The complexity and cost go down exponentially with time; in older technology nodes, the cost of the attack is typically a few thousand dollars. Unless global secrets are involved, most threat models (e.g., credit cards, transit cards) should be able to justify keeping these attacks outside the threat model. If global secrets are involved, then security-by-obscurity likely will not raise the bar enough to protect your assets. The cable TV and printer cartridge industries found this out the hard way. ↩︎