UMBC Center for Information Security and Assurance (CISA)

Permanent URI for this collection: http://hdl.handle.net/11603/12245

In 2001, Alan Sherman created the UMBC Center for Information Security and Assurance (CISA), UMBC’s original and ongoing cybersecurity center, through which UMBC gained recognition from NSA and DHS as a National Center of Academic Excellence in Cyber Defense Education and Research (CAE, CAE-R).

It has a threefold mission to promote cybersecurity research, education, and best practices. It fosters collaborations with local businesses, laboratories, and government agencies, and promotes interaction among UMBC departments and offices. Projects include, but are not restricted to, secure voting, trustworthy computing, cryptology, network security, cloud computing security, security education, protocol analysis, and digital forensics.

CISA includes the UMBC Cyber Defense Lab (CDL), which pursues a variety of research projects, including protocol analysis (funded by NSA) and cybersecurity educational assessments (funded by NSF). The CDL meets biweekly, with members and guests presenting ongoing research projects.


Recent Submissions

  • Item
    Project-Based Learning Continues to Inspire Cybersecurity Students: The 2018–2019 SFS Research Studies at UMBC
    (ACM) Golaszewski, Enis; Sherman, Alan T.; Oliva, Linda; Peterson, Peter A. H.; Bailey, Michael R.; Bohon, Scott; Bonyadi, Cyrus; Borror, Casey; Coleman, Ryan; Flenner, Johannah; Enamorado, Elias; Eren, Maksim E.; Khan, Mohammad; Larbi, Emmanuel; Marshall, Kyle; Morgan, William; Mundy, Lauren; Onana, Gabriel; Orr, Selma Gomez; Parker, Lauren; Pinkney, Caleb; Rather, Mykah; Rodriguez, Jimmy; Solis, Bryan; Tete, Wubnyonga; Tsega, Tsigereda B.; Valdez, Edwin; Varga, Charles K.; Weber, Brian; Wnuk-Fink, Ryan; Yonkeu, Armand; Zetlmeisl, Lindsay; Doyle, Damian; O'Brien, Casey; Roundy, Joseph; Suess, Jack
  • Item
    Formal Methods Analysis of the Secure Remote Password Protocol
    (Springer International Publishing, 2020-03-16) Sherman, Alan T.; Lanus, Erin; Liskov, Moses; Zieglar, Edward; Chang, Richard; Golaszewski, Enis; Wnuk-Fink, Ryan; Bonyadi, Cyrus J.; Yaksetig, Mario; Blumenfeld, Ian
    We analyze the Secure Remote Password (SRP) protocol for structural weaknesses using the Cryptographic Protocol Shapes Analyzer (CPSA) in the first formal analysis of SRP (specifically, Version 3). SRP is a widely deployed Password Authenticated Key Exchange (PAKE) protocol used in 1Password, iCloud Keychain, and other products. As with many PAKE protocols, two participants use knowledge of a pre-shared password to authenticate each other and establish a session key. SRP aims to resist dictionary attacks, not store plaintext-equivalent passwords on the server, avoid patent infringement, and avoid export controls by not using encryption. Formal analysis of SRP is challenging in part because existing tools provide no simple way to reason about its use of the mathematical expression v + gᵇ mod q. Modeling v + gᵇ as encryption, we complete an exhaustive study of all possible execution sequences of SRP. Ignoring possible algebraic attacks, this analysis detects no major structural weakness, and in particular no leakage of any secrets. We do uncover one notable weakness of SRP, which follows from its design constraints. It is possible for a malicious server to fake an authentication session with a client, without the client's participation. This action might facilitate an escalation of privilege attack, if the client has higher privileges than does the server. We conceived of this attack before we used CPSA and confirmed it by generating corresponding execution shapes using CPSA.
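The mixed term v + gᵇ that complicates the formal analysis is easiest to see in code. Below is a toy sketch of the SRP-3 key agreement, with deliberately tiny, insecure parameters; the modulus N, generator g, and hash H are illustrative assumptions, not the deployed constants.

```python
# Toy sketch of the SRP-3 key agreement the abstract analyzes.
# Parameters are deliberately tiny and insecure; real SRP uses a
# large safe prime and a standardized hash and message encoding.
import hashlib

N = 2267  # small prime, for illustration only
g = 2

def H(*args) -> int:
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Registration: the server stores (salt, v), never the password itself.
salt, password = 42, "correct horse"
x = H(salt, password) % (N - 1)
v = pow(g, x, N)  # password verifier

# Login: client picks secret a, server picks secret b.
a, b = 123, 456
A = pow(g, a, N)              # client -> server
B = (v + pow(g, b, N)) % N    # server -> client: the mixed term v + g^b
u = H(B) % (N - 1)            # public scrambling parameter, derived from B

# Both sides derive g^(ab + bux) without transmitting the password.
S_client = pow((B - pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
assert S_client == S_server
```

The sketch also shows why existing tools struggle: B blends a group element and a verifier by modular addition, which is neither plain encryption nor plain exponentiation.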
  • Item
    The SFS Summer Research Study at UMBC: Project-Based Learning Inspires Cybersecurity Students
    (2018-11-12) Sherman, Alan; Golaszewski, Enis; LaFemina, Edward; Goldschen, Ethan; Khan, Mohammed; Mundy, Lauren; Rather, Mykah; Solis, Bryan; Tete, Wubnyonga; Valdez, Edwin; Weber, Brian; Doyle, Damian; O’Brien, Casey; Oliva, Linda; Roundy, Joseph; Suess, Jack
    May 30-June 2, 2017, Scholarship for Service (SFS) scholars at the University of Maryland, Baltimore County (UMBC) analyzed the security of a targeted aspect of the UMBC computer systems. During this hands-on study, with complete access to source code, students identified vulnerabilities, devised and implemented exploits, and suggested mitigations. As part of a pioneering program at UMBC to extend SFS scholarships to community colleges, the study helped initiate six students from two nearby community colleges, who transferred to UMBC in fall 2017 to complete their four-year degrees in computer science and information systems. The study examined the security of a set of "NetAdmin" custom scripts that enable UMBC faculty and staff to open the UMBC firewall to allow external access to machines they control for research purposes. Students discovered vulnerabilities stemming from weak architectural design, record overflow, and failure to sanitize inputs properly. For example, they implemented a record-overflow and code-injection exploit that exfiltrated the vital API key of the UMBC firewall. This report summarizes student activities and findings, and reflects on lessons learned for students, educators, and system administrators. Our students found the collaborative experience inspirational, students and educators appreciated the authentic case study, and IT administrators gained access to future employees and received free recommendations for improving the security of their systems. We hope that other universities can benefit from our motivational and educational strategy of teaming educators and system administrators to engage students in active project-based learning centering on focused questions about their university computer systems.
  • Item
    The Punchscan Voting System
    (2006-08-20) Carback, Rick; Chaum, David; Clark, Jeremy; Essex, Aleks; Fisher, Kevin; Hosp, Ben; Popoveniuc, Stefan; Robin, Jeremy
  • Item
    Private Virtual Infrastructure: A Model for Trustworthy Utility Cloud Computing
    Krautheim, F. John; Phatak, Dhananjay S.; Sherman, Alan T.
    Private Virtual Infrastructure is a security architecture for cloud computing which uses a new trust model to share the responsibility of security in cloud computing between the service provider and client, decreasing the risk exposure to both. Private Virtual Infrastructure is under control of the information owner while the cloud fabric is under control of the service provider. The Private Virtual Infrastructure architecture comprises a cluster of trusted computing fabric platforms that host virtual servers running an application server with a Locator Bot security service. The cloud Locator Bot pre-measures the cloud platform for security properties to determine the trustworthiness of the platform. The Locator Bot uses Trusted Execution Technology and virtual Trusted Platform Modules to pre-measure the target environment and securely provision the Private Virtual Infrastructure in the cloud, thus protecting information by preventing data from being placed in malicious or untrusted environments. Private Virtual Infrastructure — including Locator Bot — provides organizations with tools to maintain control of their information in the cloud and realize the benefits of cloud computing, with assurance that their information is protected. This paper presents a cloud trust model, Private Virtual Infrastructure architecture, and a Locator Bot protocol in enough detail to support further analysis or implementation.
  • Item
    Punchscan: Introduction and System Definition of a High-Integrity Election System
    (2006-05) Fisher, Kevin; Carback, Richard; Sherman, Alan T.
    Punchscan is a unique hybrid paper/electronic voting system concept. As a receipt-based system, Punchscan provides high voter privacy and election integrity, yet it does not rely on the complex and fragile electronic voting machines found in many current implementations. In this paper, we define the Punchscan system and voting protocol, including the people, objects and events involved and the ways they interact. We also trace the flow of data throughout the election process. This definition will aid those implementing the Punchscan system, but also lays a foundation for critical analysis and discussion within the voting research community.
  • Item
    Developing and Delivering Hands-On Information Assurance Exercises: Experiences with the Cyber Defense Lab at UMBC
    (IEEE, 2005-06-10) Sherman, Alan T.; Roberts, Brian O.; Byrd, William E.; Baker, Matthew R.; Simmons, John
    In summer 2003, we developed four new hands-on information assurance educational exercises for use in the UMBC undergraduate and graduate curricula. Exercise topics comprise buffer overflow attacks, vulnerability scanning, password security and policy, and flaws in the Wired Equivalent Privacy (WEP) protocol. During each exercise, each student carries out structured activities using a laptop from a mobile cart that can be rolled into any classroom. These dedicated, isolated machines permit a student to make mistakes safely, even while acting as the system administrator, without adversely affecting any other user. Each exercise is organized in a modular fashion to facilitate varied use for different courses, levels, and available time. Our experiences delivering these exercises show that practical hands-on activities motivate students and enhance learning. In this paper we describe our exercises and share lessons learned, including the importance of careful planning, ethical considerations, the rapid obsolescence of tools, and the difficulty of including exercises in already busy courses.
  • Item
    Private Virtual Infrastructure for Cloud Computing
    (2009) Krautheim, F. John
    Cloud computing places an organization’s sensitive data in the control of a third party, introducing a significant level of risk on the privacy and security of the data. We propose a new management and security model for cloud computing called the Private Virtual Infrastructure (PVI) that shares the responsibility of security in cloud computing between the service provider and client, decreasing the risk exposure to both. The PVI datacenter is under control of the information owner while the cloud fabric is under control of the service provider. A cloud Locator Bot pre-measures the cloud for security properties, securely provisions the datacenter in the cloud, and provides situational awareness through continuous monitoring of the cloud security. PVI and Locator Bot provide the tools that organizations require to maintain control of their information in the cloud and realize the benefits of cloud computing.
  • Item
    Punchscan in Practice: An E2E Election Case Study
    Essex, Aleks; Clark, Jeremy; Carback, Rick; Popoveniuc, Stefan
    This paper presents a case study of the E2E voting system Punchscan and its first use in a binding election. The election was held in March 2007 at the University of Ottawa for several offices within the university’s graduate student association. This case study presents a walkthrough of the election and offers discussion as to how the voters and poll workers responded to the Punchscan system, with implications for E2E systems in general.
  • Item
    On the Independent Verification of a Punchscan Election
    (2007) Carback III, Richard T.; Clark, Jeremy; Essex, Aleks; Popoveniuc, Stefan
    Punchscan is a cryptographic voting system providing full transparency throughout the entire election process: a mandatory pre-election public audit, a mandatory post-election public audit, and the ability for a voter to check the correct printing and recorded marks on a paper receipt she keeps. Even though a voter can verify that her vote is counted as she cast it, the ballot receipt does not contain enough information to show someone else how she voted. These unique properties produce a system with a voluntary and universally available process that establishes an overwhelmingly high statistical degree of confidence in the integrity of the outcome—in other words, they allow for unparalleled independent verification of election results. These ideas are new and have the potential to radically change the way we think about and build the voting systems of the future.
  • Item
    Scantegrity II Municipal Election at Takoma Park: The First E2E Binding Governmental Election with Ballot Privacy
    (2009-11-03) Carback, Richard; Chaum, David; Clark, Jeremy; Conway, John; Essex, Aleksander; Herrnson, Paul S.; Mayberry, Travis; Popoveniuc, Stefan; Rivest, Ronald L.; Shen, Emily; Sherman, Alan T.; Vora, Poorvi L.
    On November 3, 2009, voters in Takoma Park, Maryland, cast ballots for the mayor and city council members using the Scantegrity II voting system—the first time any end-to-end (E2E) voting system with ballot privacy has been used in a binding governmental election. This case study describes the various efforts that went into the election—including the improved design and implementation of the voting system, streamlined procedures, agreements with the city, and assessments of the experiences of voters and poll workers. The election, with 1728 voters from six wards, involved paper ballots with invisible-ink confirmation codes, instant-runoff voting with write-ins, early and absentee (mail-in) voting, dual-language ballots, provisional ballots, privacy sleeves, any-which-way scanning with parallel conventional desktop scanners, end-to-end verifiability based on optional web-based voter verification of votes cast, a full hand recount, thresholded authorities, three independent outside auditors, fully-disclosed software, and exit surveys for voters and poll workers. Despite some glitches, the use of Scantegrity II was a success, demonstrating that E2E cryptographic voting systems can be effectively used and accepted by the general public.
  • Item
    An Examination of Vote Verification Technologies: Findings and Experiences from the Maryland Study
    (2006-04-15) Sherman, Alan T.; Gangopadhyay, Aryya; Holden, Stephen H.; Karabatis, George; Koru, A. Gunes; Law, Chris M.; Norris, Donald F.; Pinkston, John; Sears, Andrew; Zhang, Dongsong
    We describe our findings and experiences from our technical review of vote verification systems for the Maryland State Board of Elections (SBE). The review included the following four systems for possible use together with Maryland’s existing Diebold AccuVote-TS (touch screen) voting system: VoteHere Sentinel; SCYTL Pnyx.DRE; MIT-Selker audio system; Diebold voter verified paper audit trail. As a baseline, we also examined the SBE’s procedures for “parallel testing” of its Diebold system. For each system, we examined how it enables voters who use touch screens to verify that their votes are cast as intended, recorded as cast, and reported as recorded. We also examined how well it permits post-election auditing. To this end, we considered implementation, impact on current state voting processes and procedures, impact on voting, functional completeness, security against fraud, attack and failure, reliability, accessibility, and voter privacy.
  • Item
    Punchscan with Independent Ballot Sheets: Simplifying Ballot Printing and Distribution with Independently Selected Ballot Halves
    (2007-06-15) Carback III, Richard T.; Popoveniuc, Stefan; Sherman, Alan T.; Chaum, David
    We propose and implement a modification to the Punchscan protocol that simplifies ballot printing and distribution. In this improved version, each voter creates a ballot at the polling location by combining independently selected ballot halves, rather than using two pre-selected halves with the same serial number. The only time a ballot used for voting is human-readable is when it is in the voter’s hands, reducing possible opportunities to violate voter privacy. This small but nontrivial change lets election officials print and distribute ballots using multiple printers more easily, without giving any one printer the ability to compromise voter privacy with certainty.
  • Item
    Catching the Cuckoo: Verifying TPM Proximity Using a Quote Timing Side-Channel
    (Springer, Berlin, Heidelberg, 2011-06-22) Fink, Russell A.; Sherman, Alan T.; Mitchell, Alexander O.; Challener, David C.
    We present a Trusted Platform Module (TPM) application protocol that detects a certain man-in-the-middle attack where an adversary captures and replaces a legitimate computing platform with an imposter that forwards platform authentication challenges to the captive over a high-speed data link. This revised Cuckoo attack allows the imposter to satisfy a user's query of platform integrity, tricking the user into divulging sensitive information to the imposter. Our protocol uses an ordinary smart card to verify the platform boot integrity through TPM quote requests, and to verify TPM proximity by measuring TPM tickstamp times required to answer the quotes. Quotes not answered in an expected amount of time may indicate the presence of an imposter's data link, revealing the Cuckoo attack. We describe a timing model for the Cuckoo attack, and summarize experimental results that demonstrate the feasibility of using timing to detect the Cuckoo attack over practical levels of adversary link speeds.
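The timing test the protocol relies on can be sketched as a simple calibrated threshold: quotes answered slower than locally observed response times suggest the challenge was forwarded over an adversary's link. The latency figures and the 3-sigma cutoff below are made-up modeling assumptions, not measurements from the paper.

```python
# Illustrative decision rule for the quote-timing side channel.
# All numbers are assumed for illustration; real calibration would use
# the TPM's tickstamp counters, as the paper describes.
import statistics

def calibrate(trusted_samples_ms, k=3.0):
    """Threshold = mean + k standard deviations of trusted local quote times."""
    mu = statistics.mean(trusted_samples_ms)
    sigma = statistics.stdev(trusted_samples_ms)
    return mu + k * sigma

def proximate(observed_ms, threshold_ms):
    """True if the TPM quote arrived fast enough to be local."""
    return observed_ms <= threshold_ms

trusted = [812, 815, 810, 814, 811, 813]   # local quote times (ms), assumed
threshold = calibrate(trusted)

local_ok = proximate(812, threshold)       # direct local TPM
forwarded_ok = proximate(845, threshold)   # extra round trip over a data link
```

The attack is detectable only when the forwarding link adds delay that exceeds the natural jitter of local quotes, which is why the paper measures feasibility across practical link speeds.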
  • Item
    Scantegrity II: End-to-End Verifiability for Optical Scan Election Systems using Invisible Ink Confirmation Codes
    (2008-07-01) Chaum, David; Carback, Richard; Clark, Jeremy; Essex, Aleksander; Popoveniuc, Stefan; Rivest, Ronald L.; Ryan, Peter Y. A.; Shen, Emily; Sherman, Alan T.
    We introduce Scantegrity II, a practical enhancement for optical scan voting systems that achieves increased election integrity through the novel use of confirmation codes printed on ballots in invisible ink. Voters mark ballots just as in conventional optical scan but using a special pen that develops the invisible ink. Verifiability of election integrity is end-to-end, allowing voters to check that their votes are correctly included (without revealing their votes) and allowing anyone to check that the tally is computed correctly from the included votes. Unlike in the original Scantegrity, dispute resolution neither relies on paper chits nor requires election officials to recover particular ballot forms. Scantegrity II works with either precinct-based or central scan systems. The basic system has been implemented in open-source Java with off-the-shelf printing equipment and has been tested in a small election. An enhancement to Scantegrity II keeps ballot identification and other unique information that is revealed to the voter in the booth from being learned by persons other than the voter. This modification achieves privacy that is essentially equivalent to that of ordinary paper ballot systems, allowing manual counting and recounting of ballots.
  • Item
    A Conjunction, Language, and System Facets for Private Packet Filtering
    (ASE, 2013) Oehler, Michael; Phatak, Dhananjay S.; Sherman, Alan T.
    Our contribution defines a conjunction operator for private stream searching, integrates this operator into a high-level language, and describes the system facets that achieve a realization of private packet filtering. Private stream searching uses an encrypted filter to conceal search terms, processes a search without decrypting the filter, and saves encrypted results to an output buffer. Our conjunction operator is processed as a bitwise summation of hashed keyword values and as a reference into the filter. The operator thus broadens the search capability, and does not increase the complexity of the private search system. When integrated into the language, cyber defenders can filter packets using sensitive attack indicators, and gain situational awareness without revealing those sensitive indicators.
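The conjunction operator's core idea, summing hashed keyword values into a single reference into the filter, can be sketched as follows. The filter size and hash are assumptions; a real deployment operates on an encrypted filter so the indicators stay hidden from the machine running the search.

```python
# Minimal sketch: hash each keyword, sum the hashes, and use the sum
# (mod the filter length) as one reference into the search filter.
# m and the hash choice are illustrative assumptions, not the paper's
# parameters, and the encryption layer is omitted entirely.
import hashlib

m = 1 << 16  # assumed filter length

def h(keyword: str) -> int:
    return int.from_bytes(hashlib.sha256(keyword.encode()).digest()[:8], "big")

def conjunction_index(keywords) -> int:
    """Summation of hashed keywords -> a single filter slot."""
    return sum(h(k) for k in keywords) % m

# The same set of indicators maps to the same slot regardless of the
# order in which a packet presents them.
assert conjunction_index(["tcp/445", "10.0.0.7"]) == conjunction_index(["10.0.0.7", "tcp/445"])
```

Because the conjunction collapses to one filter reference, it broadens what can be matched without growing the filter or the per-packet work, which is the complexity claim in the abstract.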
  • Item
    Statistical Techniques for Language Recognition: An Introduction and Guide for Cryptanalysts
    (Taylor & Francis Online, 2010-06-04) Ganesan, Ravi; Sherman, Alan T.
    We explain how to apply statistical techniques to solve several language-recognition problems that arise in cryptanalysis and other domains. Language recognition is important in cryptanalysis because, among other applications, an exhaustive key search of any cryptosystem from ciphertext alone requires a test that recognizes valid plaintext. Written for cryptanalysts, this guide should also be helpful to others as an introduction to statistical inference on Markov chains. Modeling language as a finite stationary Markov process, we adapt a statistical model of pattern recognition to language recognition. Within this framework we consider four well-defined language-recognition problems: 1) recognizing a known language, 2) distinguishing a known language from uniform noise, 3) distinguishing unknown 0th-order noise from unknown 1st-order language, and 4) detecting non-uniform unknown language. For the second problem we give a most powerful test based on the Neyman-Pearson Lemma. For the other problems, which typically have no uniformly most powerful tests, we give likelihood ratio tests. We also discuss the chi-squared test statistic X² and the Index of Coincidence IC. In addition, we point out useful works in the statistics and pattern-matching literature for further reading about these fundamental problems and test statistics.
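As a concrete illustration of the chi-squared statistic X² discussed in the guide, the sketch below scores single-letter counts against rough reference frequencies. This is a 0th-order simplification for illustration only; the guide's models are 1st-order Markov, and the frequency table is an assumption, not the guide's data.

```python
# Hedged sketch of the chi-squared statistic X² for language recognition,
# scoring observed letter counts against expected English counts.
from collections import Counter

# Approximate relative frequencies for a few common letters (assumed).
ENGLISH = {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070,
           "n": 0.067, "s": 0.063, "h": 0.061, "r": 0.060}

def chi_squared(text: str) -> float:
    letters = [c for c in text.lower() if c in ENGLISH]
    n = len(letters)
    counts = Counter(letters)
    # X² = sum over symbols of (observed - expected)^2 / expected
    return sum((counts[c] - n * p) ** 2 / (n * p) for c, p in ENGLISH.items())

english = "the quick brown fox jumps over the lazy dog " * 7
skewed = "a" * len(english)
assert chi_squared(english) < chi_squared(skewed)
```

Lower X² means the sample's letter distribution is closer to the reference model, which is how the statistic serves as a plaintext-recognition test in a key search.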
  • Item
    Statistical Techniques For Language Recognition: An Empirical Study Using Real And Simulated English
    (Taylor & Francis, 2010-06-04) Ganesan, Ravi; Sherman, Alan T.
    Computer experiments compare the effectiveness of five test statistics at recognizing and distinguishing several types of real and simulated English strings. These experiments measure the statistical power and robustness of the test statistics X², ML, IND, S, and IC when applied to samples of everyday American English from the Brown Corpus and Wall Street Journal and to simulated English generated from 1st-order Markov models based on these samples. An empirical approach is needed because the asymptotic theory of statistical inference on Markov chains does not apply to short strings drawn from natural language. Here, X² is the chi-squared test statistic; ML is a likelihood ratio test for recognizing a known language; IND is a likelihood ratio test for distinguishing unknown 0th-order noise from unknown 1st-order language; S is a log-likelihood function that is a most-powerful test for distinguishing a known language from uniform noise; and IC is the index of coincidence. The test languages comprise four types of real English, two types of simulated 1st-order English, and three types of noise. Two experiments characterize the distributions of these test statistics when applied to nine test languages, presented as strings of different lengths and contaminated with various amounts of noise. Experiment 1 varies the length of the string from 2 to 2¹⁷ characters. Experiment 2 adds uniform noise to samples of three fixed lengths (2⁴, 2⁷, 2¹⁰), with the amount of added noise ranging from 0% to 100%. These experiments assess the performance of the test statistics under realistic cryptographic constraints. Using graphs and tables of observed statistical power, we compare the effectiveness of the test statistics at distinguishing various pairs of languages at several critical levels. Although no statistic dominated all others for all critical levels and string lengths, each test performed well at its designated task. For distinguishing a known type of English from uniform noise at critical levels 0.1 through 0.0001, X² attained the highest power, with ML and S also performing well. For distinguishing uniform noise from a known type of English at the same critical levels, ML had the overall best performance, with IC, X², S, and IND also performing well. And for each of these tasks under noisy conditions, ML attained the highest power. In addition, through histograms we describe the actual distribution of each statistic on various language types. These detailed results, which show relationships between power, critical level, and string length, will help cryptanalysts and others apply statistical methods to practical language-recognition problems.
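The index of coincidence IC that both studies evaluate is simple to compute: it estimates the probability that two letters drawn from a string match. English text sits near 0.066, while uniform 26-letter noise sits near 1/26 ≈ 0.0385. A minimal sketch, assuming the usual letters-only convention (the papers define IC formally):

```python
# Index of coincidence: probability that two randomly drawn letters
# from the string are equal. Letters-only preprocessing is an assumed
# convention for this sketch.
from collections import Counter

def index_of_coincidence(text: str) -> float:
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

english = "statistical techniques for language recognition " * 4
flat = "abcdefghijklmnopqrstuvwxyz" * 5  # perfectly uniform letter counts

assert index_of_coincidence(english) > index_of_coincidence(flat)
```

Because IC depends only on letter counts, it is cheap to compute on short ciphertext samples, which is why it appears alongside the likelihood-based statistics in these comparisons.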
  • Item
    Design and implementation of FROST: Digital forensic tools for the OpenStack cloud computing platform
    (Elsevier B.V., 2013-08) Dykstra, Josiah; Sherman, Alan T.
    We describe the design, implementation, and evaluation of FROST: three new forensic tools for the OpenStack cloud platform. Operated through the management plane, FROST provides the first dedicated forensics capabilities for OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and provides trustworthy forensic acquisition of virtual disks, API logs, and guest firewall logs. Unlike traditional acquisition tools, FROST works at the cloud management plane rather than interacting with the operating system inside the guest virtual machines, thereby requiring no trust in the guest machine. We assume trust in the cloud provider but FROST overcomes non-trivial challenges of remote evidence integrity by storing log data in hash trees and returning evidence with cryptographic hashes. Our tools are user-driven, allowing customers, forensic examiners, and law enforcement to conduct investigations without necessitating interaction with the cloud provider. We demonstrate through examples how forensic investigators can independently use our new features to obtain forensically sound data. Our evaluation demonstrates the effectiveness of our approach to scale in a dynamic cloud environment. The design supports an extensible set of forensic objectives, including the future addition of other data preservation, discovery, real-time monitoring, metrics, auditing, and acquisition capabilities.
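The hash-tree approach to remote evidence integrity mentioned in the abstract can be illustrated with a generic Merkle-tree sketch. This is not FROST's actual on-disk format, only the underlying idea: any tampering with a stored log entry changes the root hash an examiner checks.

```python
# Generic Merkle-tree sketch of hash-tree log storage. The log entries
# and tree shape are illustrative assumptions, not FROST's format.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Root hash over leaf hashes; duplicates the last node on odd levels."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

logs = [b"api: attach vol-7", b"fw: allow 22/tcp", b"api: snapshot vm-3"]
root = merkle_root(logs)

# Altering any single entry yields a different root.
tampered = merkle_root([b"api: attach vol-7", b"fw: allow ALL", b"api: snapshot vm-3"])
assert root != tampered
```

Returning evidence together with such hashes lets an examiner verify integrity without trusting the guest machine, matching the trust model the abstract describes.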
  • Item
    Acquiring Forensic Evidence from Infrastructure-as-a-Service Cloud Computing: Exploring and Evaluating Tools, Trust, and Techniques
    (2012-08-06) Dykstra, Josiah; Sherman, Alan T.
    We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-a-service cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccessData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.