Post-Quantum Cryptography for Consumer Electronics: Performance and Deployment
The 20-Second Summary
NIST standardized its first post-quantum cryptography algorithms, but consumer electronics teams still need a practical answer to “what can my device actually run?” This paper benchmarks PQC KEMs and signature schemes across macOS, Ubuntu, and Raspberry Pi 4 against RSA/ECC baselines to ground scheme choice in measured performance. The headline is that ML-KEM (Kyber) and ML-DSA (Dilithium) offer the best overall balance, while Classic McEliece is often blocked by key sizes and SPHINCS+ trades conservative assumptions for larger signatures and slower signing (arXiv 2025).
The Problem
The post-quantum cryptography transition is no longer theoretical. NIST finalized its first PQC standards in 2024: ML-KEM (based on Kyber) for key encapsulation and ML-DSA (based on Dilithium) for digital signatures. Additional schemes - SLH-DSA (SPHINCS+, also standardized in 2024), the forthcoming FN-DSA (Falcon), and candidates such as Classic McEliece and HQC - round out the landscape.
For server-side and desktop deployments, the transition path is relatively clear: library updates, protocol migrations, and certificate chain replacements. But consumer electronics presents a different challenge entirely. Smart home devices, wearables, automotive ECUs, medical monitors, and embedded sensors operate under constraints that servers don’t face:
limited CPU (often Cortex-M class), tight memory budgets, battery constraints, bandwidth limits (key and signature sizes hit OTA and handshake costs), and long deployment cycles where 10+ years in the field demands quantum-safe choices early.
The security community has produced excellent cryptographic analyses of PQC schemes, but what CE engineers need is systems-level performance data: how fast is keygen on a Raspberry Pi? How much RAM does signing consume? Will a Classic McEliece public key fit in my device’s flash?
This paper reports benchmarking results to help ground feasibility discussions in measured performance.
Why Existing Approaches Fall Short
| Resource | Limitation |
|---|---|
| NIST reports | Focus on security levels and mathematical properties, not hardware performance |
| liboqs benchmarks | Measure raw crypto operations but not system-level integration context |
| Academic PQC papers | Typically benchmark on high-end hardware (Intel Xeon, large RAM) |
| Vendor datasheets | Cover specific implementations but not cross-platform comparisons |
| Migration guides | Focus on protocol changes, not device-specific feasibility |
The core issue: CE engineers need a decision matrix that maps PQC scheme → device class → feasibility, and no one has built it from cross-platform empirical data.
Our Approach: Cross-Platform PQC Benchmarking
We conducted a systematic performance study of NIST PQC standards and candidates, benchmarked against classical RSA and ECC baselines, across three platforms representing different tiers of consumer electronics hardware.
Platforms
| Platform | Represents | Key Specs |
|---|---|---|
| macOS (Apple Silicon) | High-end consumer devices, laptops, smartphones | Fast CPU, ample RAM |
| Ubuntu (x86-64) | Desktop/server-class, gateway devices | Standard compute baseline |
| Raspberry Pi 4 | Constrained devices, IoT gateways, edge nodes | ARM Cortex-A72, 1-4 GB RAM |
The Raspberry Pi 4 is particularly important: it sits at the boundary between “comfortable” and “constrained” computing. If a scheme performs well on a Pi 4, it’s likely feasible for a wide range of IoT gateways and edge devices. If it struggles, deployment on more constrained hardware is doubtful.
Algorithms Benchmarked
We benchmark KEMs (ML-KEM/Kyber, Classic McEliece, and HQC), signature schemes (ML-DSA/Dilithium, SLH-DSA/SPHINCS+, and FN-DSA/Falcon), and classical baselines (RSA-2048/3072/4096 and ECDSA P-256/P-384).
Metrics
For each scheme on each platform, we measure key generation and operation latency (encapsulation/signing and decapsulation/verification), message sizes (public keys plus ciphertexts/signatures), and memory indicators (peak allocation during operations).
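To make the measurement procedure concrete, the sketch below times individual KEM operations and records key and ciphertext sizes. It assumes the liboqs-python bindings (imported as `oqs`); the mechanism name string ("Kyber768" vs. "ML-KEM-768") and the repeat count are illustrative and depend on the installed liboqs version.
```python
# Minimal per-operation measurement sketch, assuming the liboqs-python bindings.
import time
import oqs

def median_ms(fn, repeats=100):
    """Median wall-clock latency of fn() in milliseconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return sorted(samples)[len(samples) // 2]

with oqs.KeyEncapsulation("Kyber768") as kem:
    print("keygen ms:", median_ms(kem.generate_keypair))
    public_key = kem.generate_keypair()        # fresh keypair for encap/decap timing
    ciphertext, _ = kem.encap_secret(public_key)
    print("encaps ms:", median_ms(lambda: kem.encap_secret(public_key)))
    print("decaps ms:", median_ms(lambda: kem.decap_secret(ciphertext)))
    print("pk bytes:", len(public_key), "ct bytes:", len(ciphertext))
```
Memory indicators need a separate probe (for example, sampling process RSS during the loop), since wall-clock timing alone does not capture peak allocation.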
Key Results
The benchmarking reveals clear tiers of PQC feasibility for consumer electronics:
ML-KEM (Kyber): The All-Rounder
ML-KEM consistently showed a strong balance across platforms and metrics. Key generation, encapsulation, and decapsulation were fast - in many cases faster than RSA at comparable security levels. Key and ciphertext sizes are compact enough for many CE protocols.
On Raspberry Pi 4, ML-KEM-768 operations complete in low single-digit milliseconds, and key material (hundreds of bytes to ~1.5 KB depending on parameters) is manageable for storage and transmission - making ML-KEM a reasonable default for many key-exchange deployments.
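For illustration, a minimal ML-KEM round trip looks like the following, again assuming the liboqs-python bindings; the mechanism name depends on the liboqs version, and the byte count in the comment refers to the 768-level parameter set.
```python
# Minimal ML-KEM key-establishment round trip (sketch, liboqs-python assumed).
import oqs

with oqs.KeyEncapsulation("Kyber768") as device, \
     oqs.KeyEncapsulation("Kyber768") as peer:
    device_pk = device.generate_keypair()              # ~1.2 KB public key
    ciphertext, peer_secret = peer.encap_secret(device_pk)
    device_secret = device.decap_secret(ciphertext)
    assert device_secret == peer_secret                # both ends now share a key
    print("pk:", len(device_pk), "bytes  ct:", len(ciphertext), "bytes")
```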
ML-DSA (Dilithium): The Practical Signer
ML-DSA performed strongly across platforms, with signing and verification times competitive with or faster than RSA. Signature sizes (~2.4-4.6 KB depending on security level) are larger than ECDSA but acceptable for most applications.
Verification is particularly fast, which matters for devices that verify more than they sign (e.g., firmware update recipients). On Raspberry Pi 4, ML-DSA-65 remains practical for many latency-sensitive workloads.
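A firmware-update-style flow is a useful mental model. The sketch below signs a placeholder image and verifies it using only the public key; it assumes liboqs-python, a version-dependent mechanism name ("Dilithium3" vs. "ML-DSA-65"), and a stand-in firmware blob.
```python
# Sketch of a firmware-style sign/verify flow with ML-DSA (liboqs-python assumed).
import oqs

firmware_image = b"\x00" * (256 * 1024)      # placeholder 256 KB firmware blob

# Vendor side: generate a signing keypair and sign the image.
with oqs.Signature("Dilithium3") as signer:
    vendor_public_key = signer.generate_keypair()
    signature = signer.sign(firmware_image)
    print("signature bytes:", len(signature))   # ~3.3 KB at this security level

# Device side: verification needs only the vendor's public key.
with oqs.Signature("Dilithium3") as verifier:
    assert verifier.verify(firmware_image, signature, vendor_public_key)
```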
Classic McEliece: Security at a Cost
Classic McEliece offers strong, conservative security based on well-studied code problems. But its public key sizes are enormous - hundreds of kilobytes to over a megabyte depending on the parameter set.
That makes storage and transmission hard on constrained devices. It may still make sense for server-to-server settings or long-lived roots where key size is less of a constraint, but it's generally impractical as a default choice for CE.
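The kind of feasibility check implied here can be scripted: compare a scheme's public-key size against a device's storage or OTA budget. The sketch assumes liboqs-python; the Classic McEliece mechanism name and whether it is enabled depend on how liboqs was built, and the 128 KB budget is purely illustrative.
```python
# Hypothetical flash/OTA budget check for KEM public keys (liboqs-python assumed).
import oqs

FLASH_BUDGET_BYTES = 128 * 1024   # illustrative per-key storage budget

for mechanism in ("Kyber768", "Classic-McEliece-348864"):
    if mechanism not in oqs.get_enabled_kem_mechanisms():
        print(mechanism, "not enabled in this liboqs build")
        continue
    with oqs.KeyEncapsulation(mechanism) as kem:
        public_key = kem.generate_keypair()   # note: McEliece keygen is slow
        verdict = "fits" if len(public_key) <= FLASH_BUDGET_BYTES else "exceeds"
        print(f"{mechanism}: pk = {len(public_key)} bytes, {verdict} the budget")
```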
SLH-DSA (SPHINCS+): Conservative but Heavy
SPHINCS+ is hash-based and makes minimal cryptographic assumptions - appealing for long-term security. But signatures are large (tens of kilobytes) and signing is slow compared to lattice-based alternatives.
Verification is faster than signing, but still typically slower than ML-DSA, so SPHINCS+ is best reserved for use cases where size and speed are secondary to conservative assumptions (for example, certain firmware signing roots).
FN-DSA (Falcon): Compact but Complex
Falcon produces the smallest signatures among PQC schemes, but its implementation complexity (lattice Gaussian sampling, floating-point arithmetic) raises concerns for constrained devices where timing side-channels are harder to mitigate.
Classical Baselines
RSA key generation was significantly slower than key generation for any of the PQC KEMs or signature schemes at comparable security levels. ECDSA remained competitive on speed and size but offers no post-quantum security.
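The classical baselines do not go through liboqs; a sketch of comparable measurements using the pyca/cryptography package is shown below. The repeat counts, message, padding, and curve choices are illustrative.
```python
# Sketch of classical baseline timings with pyca/cryptography (not liboqs).
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

def median_ms(fn, repeats=20):
    """Median wall-clock latency of fn() in milliseconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return sorted(samples)[len(samples) // 2]

message = b"\x01" * 1024

# RSA-3072: key generation dominates; signing here uses PKCS#1 v1.5 + SHA-256.
print("RSA-3072 keygen ms:",
      median_ms(lambda: rsa.generate_private_key(public_exponent=65537, key_size=3072),
                repeats=5))
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
print("RSA-3072 sign ms:",
      median_ms(lambda: rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())))

# ECDSA P-256: fast and compact, but offers no post-quantum security.
ec_key = ec.generate_private_key(ec.SECP256R1())
print("ECDSA P-256 sign ms:",
      median_ms(lambda: ec_key.sign(message, ec.ECDSA(hashes.SHA256()))))
```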
How We Evaluated
All benchmarks used the liboqs library (Open Quantum Safe project) to ensure consistent implementations across platforms. Each operation was repeated multiple times to produce stable timing medians, and we controlled for system load, thermal throttling (important on the Pi 4), and memory pressure.
We measured wall-clock time rather than CPU cycles to reflect real-world conditions. For the Raspberry Pi 4, we additionally monitored thermal throttling behavior during sustained cryptographic operations, since CE devices in enclosed housings face similar thermal constraints.
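On the Pi 4, a sustained-load loop can be paired with the stock `vcgencmd` tool to watch temperature and throttle flags. The sketch below assumes Raspberry Pi OS (where `vcgencmd` is available) and liboqs-python; the 60-second window and mechanism name are illustrative.
```python
# Sketch: sustained KEM load on a Raspberry Pi while polling thermal state.
import subprocess
import time
import oqs

def soc_state():
    """Read SoC temperature and throttle flags via vcgencmd (Raspberry Pi only)."""
    temp = subprocess.check_output(["vcgencmd", "measure_temp"], text=True).strip()
    throttled = subprocess.check_output(["vcgencmd", "get_throttled"], text=True).strip()
    return temp, throttled

with oqs.KeyEncapsulation("Kyber768") as kem:
    public_key = kem.generate_keypair()
    deadline = time.monotonic() + 60            # sustained load window
    ops = 0
    while time.monotonic() < deadline:
        ciphertext, _ = kem.encap_secret(public_key)
        kem.decap_secret(ciphertext)
        ops += 1
        if ops % 1000 == 0:
            print(ops, *soc_state())
```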
Discussion
The benchmarks help compare scheme trade-offs on realistic hardware: latency, key and ciphertext/signature sizes, and variability across platforms.
Practical takeaways are largely about trade-offs rather than a single best choice:
Scheme selection depends on constraints (latency, memory, bandwidth) and security goals; feasibility varies across device classes, so Raspberry Pi results are informative but not a substitute for microcontroller measurements. Still, operation timings and message sizes provide a concrete way to estimate protocol impact (handshakes, OTA updates) before committing to a migration.
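As a back-of-the-envelope example of that kind of estimate, the sketch below sums the public-key, ciphertext, and signature bytes a handshake would carry if ML-KEM handles key exchange and ML-DSA handles authentication. It assumes liboqs-python; mechanism names are version-dependent, the transcript is a placeholder, and real protocols add certificate and framing overhead on top.
```python
# Rough handshake byte-overhead estimate from measured object sizes (sketch).
import oqs

with oqs.KeyEncapsulation("Kyber768") as kem, oqs.Signature("Dilithium3") as sig:
    kem_pk = kem.generate_keypair()
    ciphertext, _ = kem.encap_secret(kem_pk)
    sig_pk = sig.generate_keypair()
    signature = sig.sign(b"transcript-hash-placeholder")

    total = len(kem_pk) + len(ciphertext) + len(sig_pk) + len(signature)
    print(f"KEM pk {len(kem_pk)} + ct {len(ciphertext)} + "
          f"sig pk {len(sig_pk)} + sig {len(signature)} = {total} bytes")
```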
Limitations and Next Steps
Current Limitations:
Three platforms is a start, but it doesn’t cover the full CE spectrum (Cortex-M, RISC-V, FPGA, and secure elements matter). The liboqs implementations may differ from vendor-optimized or hardware-accelerated versions, and the measurements are for individual operations rather than full protocol integrations (TLS handshakes, MQTT, OTA chains). Side-channel resistance is also out of scope here, even though constrained devices are often the most exposed to timing and power analysis.
Future Work:
The obvious next step is to move down the stack: benchmark on microcontrollers (Cortex-M4/ESP32/nRF52), measure protocol-level performance (TLS 1.3/DTLS for IoT), and evaluate hardware-accelerated implementations where crypto coprocessors exist. A high-value output would be a public decision tool mapping device constraints to feasible schemes, and energy-cost measurements are essential for battery-powered devices.