The Top 10 Blog Posts of 2023


Every January on the SEI Blog, we present the ten most-visited posts of the previous year. This year's top 10 highlights our work in quantum computing, software modeling, large language models, DevSecOps, and artificial intelligence. The posts, which were published between January 1, 2023, and December 31, 2023, are presented below in reverse order based on the number of visits.

#10 Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System

by Carrie Gardner, Katherine-Marie Robinson, Carol J. Smith, and Alexandrea Steiner

As potential applications of artificial intelligence (AI) continue to expand, the question remains: will users want the technology and trust it? How can innovators design AI-enabled products, services, and capabilities that are successfully adopted, rather than discarded because the system fails to meet operational requirements, such as end-user confidence? AI's promise is bound to perceptions of its trustworthiness.

To highlight a few real-world scenarios, consider:

  • How does a software engineer gauge the trustworthiness of automated code generation tools to co-write functional, quality code?
  • How does a doctor gauge the trustworthiness of predictive healthcare applications to co-diagnose patient conditions?
  • How does a warfighter gauge the trustworthiness of computer-vision-enabled threat intelligence to co-detect adversaries?

What happens when users don't trust these systems? AI's ability to successfully partner with the software engineer, doctor, or warfighter in these circumstances depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver.

This blog post explores leading research and lessons learned to advance discussion of how to measure the trustworthiness of AI so that warfighters and end users in general can realize the promised outcomes.

Read the post in its entirety.

#9 5 Best Practices from Industry for Implementing a Zero Trust Architecture

by Matthew Nicolai, Nathaniel Richmond, and Timothy Morrow

Zero trust (ZT) architecture (ZTA) has the potential to improve an enterprise's security posture. There is still considerable uncertainty about the ZT transformation process, however, as well as how ZTA will ultimately appear in practice. Recent executive orders M-22-009 and M-21-31 have accelerated the timeline for zero trust adoption in the federal sector, and many private sector organizations are following suit. In response to these executive orders, researchers at the SEI's CERT Division hosted Zero Trust Industry Days in August 2022 to enable industry stakeholders to share information about implementing ZT.

In this blog post, which we adapted from a white paper, we detail five ZT best practices identified during the two-day event, discuss why they are important, and provide SEI commentary and analysis on ways to empower your organization's ZT transformation.

Read the post in its entirety.

#8 The Challenge of Adversarial Machine Learning

by Matt Churilla, Nathan M. VanHoudnos, and Robert W. Beveridge

Imagine riding to work in your self-driving car. As you approach a stop sign, instead of stopping, the car accelerates and goes through the stop sign because it interprets the stop sign as a speed limit sign. How did this happen? Even though the car's machine learning (ML) system was trained to recognize stop signs, someone added stickers to the stop sign, which fooled the car into thinking it was a 45-mph speed limit sign. This simple act of putting stickers on a stop sign is one example of an adversarial attack on ML systems.
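
To make the attack mechanics concrete, here is a minimal sketch (not taken from the post itself) of the fast gradient sign method (FGSM), one well-known way to craft such perturbations in the research literature; the model, image, and label arguments are placeholders for any PyTorch image classifier and its data:

    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # Nudge every pixel by +/- epsilon in the direction that most
        # increases the classification loss: a change too small for a
        # person to notice can still flip the model's prediction.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)  # current classification loss
        loss.backward()                              # gradient of loss w.r.t. pixels
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()  # keep pixel values valid

In the stop sign scenario, the physical stickers play roughly the same role as this digital perturbation: a small, deliberately chosen change to the input that pushes the classifier toward the wrong label.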

In this SEI Blog post, I examine how ML systems can be subverted and, in this context, explain the concept of adversarial machine learning. I also examine the motivations of adversaries and what researchers are doing to mitigate their attacks. Finally, I introduce a basic taxonomy delineating the ways in which an ML model can be influenced and show how this taxonomy can be used to inform models that are robust against adversarial actions.

Read the post in its entirety.

#7 Play it Again Sam! or How I Learned to Love Large Language Models

by Jay Palat

“AI will not replace you. A person using AI will.”

-Santiago @svpino

In our work as advisors in software and AI engineering, we are often asked about the efficacy of AI code assistant tools like Copilot, GhostWriter, or Tabnine based on large language models (LLMs). Recent innovation in the building and curation of LLMs demonstrates powerful tools for the manipulation of text. By finding patterns in large bodies of text, these models can predict the next word to write sentences and paragraphs of coherent content. The concern surrounding these tools is strong: from New York schools banning the use of ChatGPT to Stack Overflow and Reddit banning answers and art generated from LLMs. While many applications are strictly limited to writing text, a few applications find the patterns to work on code as well. The hype surrounding these applications ranges from adoration (“I’ve rebuilt my workflow around these tools”) to fear, uncertainty, and doubt (“LLMs are going to take my job”). In the Communications of the ACM, Matt Welsh goes so far as to declare we’ve reached “The End of Programming.” While integrated development environments have had code generation and automation tools for years, in this post I will explore what new advancements in AI and LLMs mean for software development.
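
As a rough illustration of that next-word prediction (a sketch rather than an example from the post; the model and prompt below are stand-ins), a small open causal language model can be asked to continue a code snippet using the Hugging Face transformers library:

    from transformers import pipeline

    # Any causal language model predicts likely next tokens; a small open
    # model such as gpt2 is enough to show the idea, though real code
    # assistants are built on much larger models trained on source code.
    generator = pipeline("text-generation", model="gpt2")

    prompt = 'def fibonacci(n):\n    """Return the nth Fibonacci number."""\n'
    result = generator(prompt, max_new_tokens=40, do_sample=False)
    print(result[0]["generated_text"])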

Read the post in its entirety.

#6 How to Use Docker and NS-3 to Create Realistic Network Simulations

by Alejandro Gomez

Sometimes, researchers and developers need to simulate various kinds of networks with software that would otherwise be hard to do with real devices. For example, some hardware can be hard to get, expensive to set up, or beyond the skills of the team to implement. When the underlying hardware is not a concern but the essential functions it performs are, software can be a viable alternative.

NS-3 is a mature, open-source networking simulation library with contributions from the Lawrence Livermore National Laboratory, Google Summer of Code, and others. It has a high degree of capability to simulate various kinds of networks and user-end devices, and its Python-to-C++ bindings make it accessible to many developers.

In some cases, however, it is not enough to simulate a network. A simulator might need to test how data behaves in a simulated network (e.g., testing the integrity of User Datagram Protocol (UDP) traffic in a Wi-Fi network, how 5G data propagates across cell towers and user devices, and so on). NS-3 enables such kinds of simulations by piping data from tap interfaces (a feature of virtual network devices provided by the Linux kernel that pass Ethernet frames to and from user space) into the running simulation.
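
As a rough sketch of that piping mechanism (assuming the classic pybindgen-style Python bindings with the tap-bridge module built in, and host tap devices named tap-left and tap-right that already exist; module layout differs between NS-3 releases), two simulated nodes can be bridged to host tap interfaces like this:

    import ns.core
    import ns.network
    import ns.csma
    import ns.tap_bridge

    # Run in real time and compute checksums so real protocol stacks on the
    # host (or inside containers) will accept frames from the simulation.
    ns.core.GlobalValue.Bind("SimulatorImplementationType",
                             ns.core.StringValue("ns3::RealtimeSimulatorImpl"))
    ns.core.GlobalValue.Bind("ChecksumEnabled", ns.core.BooleanValue(True))

    nodes = ns.network.NodeContainer()
    nodes.Create(2)

    csma = ns.csma.CsmaHelper()          # simple simulated Ethernet segment
    devices = csma.Install(nodes)

    # Bridge each simulated device to a pre-existing host tap device so that
    # traffic entering tap-left emerges from tap-right after crossing the
    # simulated link, and vice versa.
    tap = ns.tap_bridge.TapBridgeHelper()
    tap.SetAttribute("Mode", ns.core.StringValue("UseBridge"))
    tap.SetAttribute("DeviceName", ns.core.StringValue("tap-left"))
    tap.Install(nodes.Get(0), devices.Get(0))
    tap.SetAttribute("DeviceName", ns.core.StringValue("tap-right"))
    tap.Install(nodes.Get(1), devices.Get(1))

    ns.core.Simulator.Stop(ns.core.Seconds(60.0))
    ns.core.Simulator.Run()
    ns.core.Simulator.Destroy()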

This blog post presents a tutorial on how you can transmit live data through an NS-3-simulated network, with the added advantage of having the data-producing/data-receiving nodes be Docker containers. Finally, we use Docker Compose to automate complex setups and make repeatable simulations in seconds.

Read the post in its entirety.

#5 5 Challenges to Implementing DevSecOps and How to Overcome Them

by Joe Yankel and Hasan Yasar

Historically, software security has been addressed at the project level, emphasizing code scanning, penetration testing, and reactive approaches for incident response. Recently, however, the discussion has shifted to the program level to align security with business objectives. The ideal outcome of such a shift is one in which software development teams act in alignment with business goals, organizational risk, and solution architectures, and these teams understand that security practices are integral to business success. DevSecOps, which builds on DevOps principles and places additional focus on security activities throughout all phases of the software development lifecycle (SDLC), can help organizations realize this ideal state. However, the shift from project- to program-level thinking raises numerous challenges. In our experience, we have observed five common challenges to implementing DevSecOps. This SEI Blog post articulates these challenges and provides actions organizations can take to overcome them.

Read the post in its entirety.

#4 Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?

by Ipek Ozkaya, Anita Carleton, John E. Robert, and Douglas Schmidt (Vanderbilt University)

Has the day finally arrived when large language models (LLMs) turn us all into better software engineers? Or are LLMs creating more hype than functionality for software development and, at the same time, plunging everyone into a world where it is hard to distinguish the perfectly formed, yet often fake and incorrect, code generated by artificial intelligence (AI) programs from verified and well-tested systems?

This blog post, which builds on ideas introduced in the IEEE paper Application of Large Language Models to Software Engineering Tasks: Opportunities, Risks, and Implications by Ipek Ozkaya, focuses on opportunities and cautions for LLMs in software development, the implications of incorporating LLMs into software-reliant systems, and the areas where more research and innovations are needed to advance their use in software engineering.

Read the post in its entirety.

#3 Rust Vulnerability Analysis and Maturity Challenges

by Garret Wassermann and David Svoboda

While the memory safety and security features of the Rust programming language can be effective in many situations, Rust's compiler is very particular about what constitutes good software design practices. Whenever design assumptions disagree with real-world data and assumptions, there is the potential for security vulnerabilities, and for malicious software that can take advantage of those vulnerabilities. In this post, we will focus on users of Rust programs, rather than Rust developers. We will explore some tools for understanding vulnerabilities, whether the original source code is available or not. These tools are important for understanding malicious software where source code is often unavailable, as well as for commenting on possible directions in which tools and automated code analysis can improve. We also comment on the maturity of the Rust software ecosystem as a whole and how that might impact future security responses, including via the coordinated vulnerability disclosure methods advocated by the SEI's CERT Coordination Center (CERT/CC). This post is the second in a series exploring the Rust programming language. The first post explored security issues with Rust.

Read the post in its entirety.

#2 Software Modeling: What to Model and Why

by John McGregor and Sholom G. Cohen

Model-based systems engineering (MBSE) environments are intended to support the engineering activities of all stakeholders across the envisioning, developing, and sustaining phases of software-intensive products. Models, the machine-manipulable representations and the products of an MBSE environment, support efforts such as the automation of standardized analysis techniques by all stakeholders and the maintenance of a single authoritative source of truth about product information. The model faithfully represents the final product in those attributes of interest to various stakeholders. The result is an overall reduction of development risks.

When initially envisioned, the requirements for a product may seem to represent the right product for the stakeholders. During development, however, the as-designed product comes to reflect an understanding of what is really needed that is superior to the original set of requirements. When it is time to integrate components, during an early incremental integration activity or a full product integration, the original set of requirements is no longer represented and is no longer a valid source of test cases. Many questions arise, such as

  • How do I evaluate the failure of a test?
  • How can I evaluate the completeness of a test set?
  • How do I track failures and the fixes applied to them?
  • How do I know that fixes applied don't break something else?

Such is the case with requirements, and much the same should be the case for a set of models created during development: are they still representative of the implemented product undergoing integration?

One of the goals for robust design is to have an up-to-date single authoritative source of truth in which discipline-specific views of the system are created using the same model elements at each development step. The single authoritative source will often be a set of requirement, specification, and design submodels within the product model. The resulting model can be used as a valid source for complete and correct verification and validation (V&V) activities. In this post, we examine the questions above and other questions that arise during development and use the answers to describe modeling and analysis activities.

Read the post in its entirety.

#1 Cybersecurity of Quantum Computing: A New Frontier

by Tom Scanlon

Research and development of quantum computers continues to grow at a rapid pace. The U.S. government alone spent more than $800 million on quantum information science (QIS) research in 2022. The promise of quantum computers is substantial: they will be able to solve certain problems that are classically intractable, meaning a conventional computer cannot complete the calculations within human-usable timescales. Given this computational power, there is growing discussion surrounding the cyber threats quantum computers may pose in the future. For instance, Alejandro Mayorkas, secretary of the Department of Homeland Security, has identified the transition to post-quantum encryption as a priority to ensure cyber resilience. There is very little discussion, however, on how we will protect quantum computers in the future. If quantum computers are to become such valuable assets, it is reasonable to project that they will eventually be the target of malicious activity.

I was recently invited to be a participant in the Workshop on Cybersecurity of Quantum Computing, co-sponsored by the National Science Foundation (NSF) and the White House Office of Science and Technology Policy, where we examined the emerging field of cybersecurity for quantum computing. While quantum computers are still nascent in many ways, it is never too early to address looming cybersecurity concerns. This post explores issues related to creating the discipline of cyber protection of quantum computing and outlines six areas of future research in the field of quantum cybersecurity.

Read the post in its entirety.

Looking Ahead in 2024

We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI's work in artificial intelligence, cybersecurity, and edge computing.
