2023 NSF REU Site on Trustable Embedded Systems Security

2023 Group

REU Cohort 2023 (top left to bottom right): Shyra LaGarde, Daniel Zhou, Nicholas Stamatakis, Victoria Wiegand, Maryam Abuissa, Tucker Beaudin, JiWon Kim, Brady Phelps, Armina Rizi, Alison Menezes

Shyra LaGarde, Georgia Institute of Technology [Mentors: Professor Walter Krawec and Professor Bing Wang]

Quantum Noise in Quantum Key Distribution (QKD) Systems Using Simplified Trusted Nodes

Cyberattacks motivate governments and businesses to continually explore more secure methods of information transmission. One common approach encrypts sensitive data and transmits it, along with digital decryption keys, as bits over classical channels such as electrical or optical pulses (0s and 1s). However, this method remains susceptible to undetectable interception and replication. Ongoing research in quantum communication presents a promising solution: quantum key distribution (QKD) uses the properties of quantum physics to build networks for transmitting highly sensitive data securely. QKD systems are designed to establish cryptographic keys between two parties, typically referred to as Alice and Bob, who rely on both a classical channel and a quantum channel. An essential consideration in implementing a QKD protocol is the presence of noisy quantum channels: quantum noise arises from the intrinsic uncertainties and fluctuations associated with processing and transmitting quantum information. The primary focus of this project is to design and implement a simulation framework that assesses the performance of QKD systems employing simplified trusted nodes and the BB84 protocol under various noise conditions.
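
As a hedged illustration of what such a simulation might compute, the sketch below estimates the sifted-key quantum bit error rate (QBER) of BB84 under a simple bit-flip noise model; the noise model, parameter names, and values are illustrative assumptions, not the project's actual framework.

```python
import random

def simulate_bb84(n_rounds: int, flip_prob: float, seed: int = 0) -> float:
    """Estimate the sifted-key QBER of BB84 over a noisy channel.

    Simplifying assumption: the channel flips Bob's measured bit with
    probability `flip_prob` whenever he measures in Alice's basis;
    mismatched-basis rounds are discarded during sifting.
    """
    rng = random.Random(seed)
    errors, sifted = 0, 0
    for _ in range(n_rounds):
        alice_bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)   # 0 = rectilinear, 1 = diagonal
        bob_basis = rng.randint(0, 1)
        if alice_basis != bob_basis:
            continue                      # sifting: discard mismatched bases
        bob_bit = alice_bit ^ (rng.random() < flip_prob)
        sifted += 1
        errors += int(bob_bit != alice_bit)
    return errors / sifted if sifted else 0.0

if __name__ == "__main__":
    # With a 5% flip probability, the measured QBER should hover near 0.05.
    print(f"QBER ~ {simulate_bb84(100_000, 0.05):.3f}")
```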

Daniel Zhou, University of Maryland [Mentors: Professor Jerry Shi, Professor Kaleel Mahmood, and Professor Shengli Zhou]

Preamble Design and Generation Utilizing Cross Correlation

Each transmission begins with a segment of data known as a preamble, which notifies the receiver that a message is inbound so it can prepare to decode the payload. Currently, the preambles used in transmissions are vulnerable to malicious actors, who can exploit them to intercept transmissions or feed receivers false information. We therefore want to generate a new set of preambles to make transmissions more secure. With the new preambles, we also need to ensure that the receiver can correctly receive and identify each one. To solve this problem we propose using a neural network to detect and classify preambles. Experimental results show that the performance of the machine learning model is not yet up to standard, but there is still research to be done.
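
As a hedged illustration of the cross-correlation side of the title, the sketch below shows a classical matched-filter baseline that a learned detector would be compared against; the threshold, normalization, and preamble set are illustrative assumptions, and the project's neural network is not shown.

```python
import numpy as np

def detect_preamble(signal: np.ndarray, preambles: list[np.ndarray],
                    threshold: float) -> tuple[int, int] | None:
    """Classify which preamble (if any) opens `signal` via cross-correlation.

    Returns (preamble_index, start_offset) for the strongest normalized
    correlation peak above `threshold`, or None if nothing qualifies.
    """
    best = None
    for i, p in enumerate(preambles):
        # Normalize by the preamble energy so a clean match scores near 1.
        corr = np.correlate(signal, p, mode="valid") / (p @ p)
        k = int(np.argmax(np.abs(corr)))
        score = abs(corr[k])
        if score >= threshold and (best is None or score > best[0]):
            best = (score, i, k)
    return (best[1], best[2]) if best else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preambles = [rng.choice([-1.0, 1.0], size=32) for _ in range(4)]
    # Embed preamble 2 in noise and try to recover it; expect (2, 50).
    signal = rng.normal(0, 0.3, size=200)
    signal[50:82] += preambles[2]
    print(detect_preamble(signal, preambles, threshold=0.5))
```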

Nicholas Stamatakis, Stony Brook University [Mentor: Professor Benjamin Fuller]

Adversarial Machine Learning in Voting Systems

This research focuses on using adversarial machine learning to model the physical processes that distort ballots: printing and scanning. This distortion creates adversarial examples for the machine learning model. An adversarial example is a bubble whose pixel values have been adjusted by some small factor ε, yet the change is enough to flip the classification from vote to blank or vice versa. We have developed an image registration system (IMS) and a variety of transformations to correct distortions caused by the physical world. Work to build a machine learning model for this process is ongoing. The goal of this research is to ensure the accuracy of the ballot tabulator used in the state of Connecticut.
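
To make the ε-perturbation idea concrete, here is a minimal fast-gradient-sign sketch against a toy logistic "vote vs. blank" classifier; the classifier, image size, and ε value are stand-in assumptions, not the project's actual model.

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, w: np.ndarray, eps: float) -> np.ndarray:
    """Fast-gradient-sign perturbation of a flattened bubble image `x`.

    For a logistic classifier p = sigmoid(w @ x + b), the gradient of the
    score w.r.t. x has the sign of w, so nudging each pixel by eps in the
    direction that lowers the 'vote' score is the canonical FGSM step.
    """
    return np.clip(x - eps * np.sign(w), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=28 * 28)              # toy linear classifier weights
    b = -0.1
    x = rng.uniform(0.4, 0.6, size=28 * 28)   # a toy filled-bubble image

    def score(v: np.ndarray) -> float:
        return float(1 / (1 + np.exp(-(w @ v + b))))

    x_adv = fgsm_perturb(x, w, eps=0.05)
    # A small per-pixel change can move the score across the 0.5 boundary.
    print(f"score before: {score(x):.3f}, after: {score(x_adv):.3f}")
```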

Victoria Wiegand, Villanova University [Mentor: Professor Benjamin Fuller]

Mapping Ease of Voting Index in Connecticut

Following the U.S. Supreme Court ruling dismantling a section of the Voting Rights Act of 1965, Democratic-led states responded by passing their own versions of the act. In June 2023, the Connecticut State Legislature passed an independent voting rights act calling for a database to investigate potentially discriminatory voting practices. We create a database and an interactive map tool to visualize voting accessibility for each voting district in Connecticut. Our tool will provide insight to voting rights investigators and the general public.
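
To illustrate what a per-district ease-of-voting index might look like computationally, here is a hedged sketch; the factor names, weights, and numbers are entirely hypothetical and are not the project's actual methodology or data.

```python
import pandas as pd

# Hypothetical accessibility factors and weights per voting district.
WEIGHTS = {"polling_places_per_10k": 0.4,
           "mean_wait_minutes": -0.35,      # longer waits lower the index
           "transit_access_score": 0.25}

def ease_of_voting_index(df: pd.DataFrame) -> pd.Series:
    """Combine z-scored district factors into a single zero-centered index."""
    cols = list(WEIGHTS)
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    return sum(w * z[col] for col, w in WEIGHTS.items())

if __name__ == "__main__":
    districts = pd.DataFrame({
        "district": ["Hartford-1", "Stamford-3", "Norwich-2"],
        "polling_places_per_10k": [1.2, 0.8, 1.5],
        "mean_wait_minutes": [12.0, 25.0, 8.0],
        "transit_access_score": [0.7, 0.9, 0.3],
    }).set_index("district")
    print(ease_of_voting_index(districts).sort_values(ascending=False))
```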

Maryam Abuissa, Amherst College [Mentors: Professor Omer Khan and Professor Caiwen Ding]

Sequestered Encryption for GPU

Performing inference for Graph Neural Networks (GNNs) under tight latency constraints has become increasingly difficult as real-world input graphs continue to grow. Compared to traditional DNNs, GNNs present unique computational challenges due to their massive, unstructured, and sparse inputs. Prior works have applied irregular and structured weight pruning techniques to reduce the complexity of GNNs with the goal of accelerating GNN performance. However, prior works using irregular pruning estimate GNN performance with floating point operations per second (FLOPS), which does not reveal the true performance implications of model sparsity caused by the diminished parallelism of sparse matrix multiplication kernels and limited sparsity propagation. In this paper, we first show quantitatively that irregular sparsity in GNN model weights does not generally propagate through matrix multiplication kernels, and when it does, it cannot be exploited to improve performance on parallel architectures that employ highly vectorized hardware. While structured weight pruning can overcome these issues, existing structured pruning work on GNNs introduces severe accuracy loss and is not scalable. To address these challenges, we propose PruneGNN, an optimized algorithm-hardware framework for structured GNN pruning. At the algorithm level, we use a structured sparse training method that achieves high sparsity and sparsity propagation while maintaining accuracy. At the hardware level, we propose a novel SIMD-aware mapping strategy for matrix multiplication kernels that unlocks performance gains with dimensionally reduced model weights. We evaluate the efficacy of our proposed framework with an end-to-end GNN inference performance analysis using real-world dynamic and static graphs on commonly used GNN models. Experimental results using an Nvidia A100 GPU show that our framework achieves an average of 2× speedup over prior structured weight pruning work.
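
A minimal sketch of the dimension-reduction idea behind structured pruning, assuming row-wise pruning by L2 norm; this is illustrative only and does not show PruneGNN's sparse training or SIMD-aware kernel mapping.

```python
import numpy as np

def structured_prune(w: np.ndarray, keep_ratio: float) -> tuple[np.ndarray, np.ndarray]:
    """Row-structured pruning of a layer weight matrix.

    Unlike irregular (element-wise) pruning, dropping whole rows yields a
    physically smaller dense matrix, so downstream matmuls stay fully
    vectorized instead of falling back to sparse kernels. Returns the
    compacted weights and the indices of the surviving rows.
    """
    n_keep = max(1, int(round(keep_ratio * w.shape[0])))
    norms = np.linalg.norm(w, axis=1)            # importance score per row
    kept = np.sort(np.argsort(norms)[-n_keep:])  # keep highest-norm rows
    return w[kept], kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 64))
    w_small, kept_rows = structured_prune(w, keep_ratio=0.25)
    x = rng.normal(size=64)
    # The pruned layer is just a smaller dense matmul: y = w_small @ x.
    print(w.shape, "->", w_small.shape, ";", len(kept_rows), "dense rows kept")
```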

Tucker Beaudin, University of Massachusetts, Amherst [Mentors: Professor Jerry Shi, Professor Kaleel Mahmood, and Professor Shengli Zhou]

Multiple Preamble Design, Generation, and Detection

The current standard for sending wireless data splits a signal into two distinct portions: a preamble and a payload. Traditionally, a large amount of effort has been expended improving the accuracy and redundancy of the payload, while efficient implementations of preambles have often been overlooked. In this work, we propose a methodology for robust generation and design of a set of preamble patterns. This new design offers both theoretical and observed benefits over older standards. Practically, our approach yields a much less computationally expensive preamble detection technique and enables effective network virtualization. Theoretically, our approach achieves minimal cross-correlation as well as the desired auto-correlation properties.
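
The two correlation properties named above can be scored directly; the sketch below is a hedged illustration of evaluating a candidate preamble set on them, not the generation procedure itself, and the normalization choices are assumptions.

```python
import numpy as np

def correlation_metrics(preambles: list[np.ndarray]) -> tuple[float, float]:
    """Score a candidate preamble set on the two desired properties.

    Returns (max autocorrelation sidelobe, max pairwise cross-correlation),
    both normalized by sequence energy; a good set drives both toward 0.
    """
    n = len(preambles[0])
    auto = 0.0
    for p in preambles:
        full = np.correlate(p, p, mode="full") / (p @ p)
        full[n - 1] = 0.0                      # ignore the main lobe at lag 0
        auto = max(auto, float(np.max(np.abs(full))))
    cross = 0.0
    for i in range(len(preambles)):
        for j in range(i + 1, len(preambles)):
            c = np.correlate(preambles[i], preambles[j], mode="full")
            cross = max(cross, float(np.max(np.abs(c))) / n)
    return auto, cross

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    candidates = [rng.choice([-1.0, 1.0], size=64) for _ in range(4)]
    sidelobe, xcorr = correlation_metrics(candidates)
    print(f"max sidelobe {sidelobe:.2f}, max cross-corr {xcorr:.2f}")
```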

JiWon Kim, University of Connecticut [Mentors: Professor Caiwen Ding and Professor Omer Khan]

Utilization of DeepShift for Privacy-Based Machine Learning

Differential privacy is an approach that helps keep individual data private and prevents potential attackers from obtaining sensitive information. The technique has been used by large organizations such as the U.S. Census Bureau, Google, and Apple to collect data while keeping it private from adversaries. However, the process is computationally expensive, resulting in slower speeds when training a model in the cloud; the clipping and Gaussian-noise steps are primarily responsible for the slowdown. Several proposals have been made to speed up model training by reducing the computational complexity of the architecture. In this paper, we therefore examine DeepShift, a technique that accelerates neural network training, as a potential way to reduce training time while maintaining high accuracy. We present studies conducted on a graph convolutional network and discuss future work to be evaluated.
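
DeepShift's core idea is to restrict weights to signed powers of two so that each multiply becomes a sign flip plus a bit shift. The sketch below shows that quantization step only, as an assumption-laden illustration; it does not reproduce DeepShift's actual training procedure, which learns the shift exponents directly.

```python
import numpy as np

def to_power_of_two(w: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Quantize weights to signed powers of two, DeepShift-style.

    Each weight is replaced by sign(w) * 2**round(log2(|w|)), so a
    multiply by w reduces to a sign flip plus a shift in fixed point.
    """
    sign = np.sign(w)
    exponent = np.round(np.log2(np.abs(w) + 1e-12))
    return sign, exponent

def shift_linear(x: np.ndarray, sign: np.ndarray, exponent: np.ndarray) -> np.ndarray:
    """Emulate a shift-based linear layer: y = (sign * 2**exponent) @ x."""
    return (sign * np.exp2(exponent)) @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=(8, 16))
    x = rng.normal(size=16)
    s, e = to_power_of_two(w)
    err = np.linalg.norm(w @ x - shift_linear(x, s, e)) / np.linalg.norm(w @ x)
    print(f"relative error from shift quantization: {err:.3f}")
```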

Brady Phelps, Ohio University [Mentors: Professor Walter Krawec and Professor Bing Wang]

Calculating Noise in Quantum Networks with Simplified Trusted Nodes

With the rise of quantum computing, modern security protocols are at risk of being broken by advanced quantum algorithms. This risk has brought forth a variety of solutions. One solution, known as Quantum Key Distribution (QKD), uses quantum computing and the laws of physics to safely and securely exchange the cryptographic keys needed for encryption algorithms. For QKD to be reliable, we have to know the expected noise in a given transaction of data. In this paper, the authors examine noise in a variety of quantum networks composed of systems known as "trusted nodes", which serve as relays that perform a QKD handoff at each node. The formulas and simulation created in this research stand to benefit future researchers designing quantum networks by simplifying the prototyping phase and supplying usable theorems for calculations. They apply to both simplified and regular trusted node networks, but further research remains on the simplified trusted node setting, including a key-rate proof.
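
As a hedged illustration of the kind of calculation such formulas enable, the sketch below composes per-hop error rates across a trusted-node chain, assuming each hop behaves as an independent binary symmetric channel; this simplifying assumption is ours, not the paper's actual noise model or theorems.

```python
def chain_qber(hop_qbers: list[float]) -> float:
    """End-to-end error rate across a chain of trusted-node QKD links.

    A relayed key bit arrives flipped iff an odd number of hops flip it,
    which gives the XOR-composition recurrence below for independent
    binary symmetric channels with the given per-hop QBERs.
    """
    total = 0.0
    for q in hop_qbers:
        total = total * (1 - q) + (1 - total) * q
    return total

if __name__ == "__main__":
    # Three hops at 2% QBER each compound to roughly 5.8% end to end.
    print(f"{chain_qber([0.02, 0.02, 0.02]):.4f}")
```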

Armina Rizi, Northeastern University [Mentor: Professor Amir Herzberg]

A User-Centric Approach to Secure Browsing on Online Platforms

With more people relying on online platforms for communication and transactions, ensuring the security of personal information has become increasingly vital. This research report focuses on the ongoing development and analysis of a browser extension aimed at enhancing users' security on the web. The objective of this study is to investigate the potential adoption and user perception of the extension, which provides users with detailed certificate information so they can make informed decisions about their online interactions.

Alison Menezes, Clemson University [Mentors: Professor Caiwen Ding and Professor Omer Khan]

Deep Leakage from Gradients on GNNs

Gradient sharing is a widely used method that allows multiple clients to train a single neural network collaboratively without exposing sensitive client data. The shared gradients were long thought to be secure; however, it has been demonstrated that training data can be reconstructed from the publicly available gradients on CNN architectures. We attempt to apply the same algorithm to a GNN architecture, reconstructing the training data from the shared gradients to show that gradient sharing is not completely secure for GNNs either. We also explore defense strategies for making the gradients more secure and evaluate their effectiveness, in particular the effectiveness of noisy gradients in defending against the reconstruction attack. Finally, we experiment with different initializers to determine how general our attack method is.
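
The attack referenced here follows the Deep Leakage from Gradients recipe: optimize a dummy input/label pair until its gradients match the shared ones. Below is a minimal sketch of that loop against a toy linear stand-in for a GNN layer; the model, hyperparameters, and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dlg_reconstruct(model, true_grads, x_shape, y_shape, steps=100, lr=0.1):
    """Deep-Leakage-from-Gradients style reconstruction (sketch).

    Optimizes a dummy input and soft label so that their gradients match
    the shared gradients `true_grads`; on a GNN the input would be node
    features, with the graph structure folded into `model`.
    """
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(y_shape, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), F.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance to the shared grads.
        diff = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(8, 3)          # toy stand-in for a GNN layer
    x, y = torch.randn(1, 8), torch.tensor([1])
    loss = F.cross_entropy(model(x), y)
    true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]
    x_rec, _ = dlg_reconstruct(model, true_grads, (1, 8), (1, 3))
    print(f"reconstruction error: {(x_rec - x).norm():.3f}")
```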