The Tenth Workshop on Fault-Tolerant Spaceborne Computing Employing New Technologies, 2017
SCHEDULE
Location guide: All activities OUTSIDE the red outline are at the Marriott (I-40 and Louisiana). The main meeting place is the CSRI building near Sandia National Laboratories. The closed sessions will be held elsewhere at Sandia, but meet at CSRI.
Tuesday, May 30, 2017 Wednesday, May 31, 2017 Thursday, June 01, 2017 Friday, June 02, 2017
8:00 AM Region outlined in red at Sandia CSRI/90 --> Breakfast (Sandia) Breakfast (Sandia) Breakfast (Sandia)
8:15 AM            
8:30 AM Legend Introduction by Organizers Lew Cohn, OGA, Status of programs User group 7 MUG (Sandia) Maestro User Group (POC Bancroft/Crago)
8:45 AM Administrative and meals Peter Kogge, EMU, Sparsity, Irregularity, and Streaming: the Need for Innovation in Architecture
9:00 AM Working groups Harald Schone, NASA/JPL, Government Smallsat Effort
9:15 AM Presentation session
9:30 AM Closed session Mitch Fletcher, ArrowheadSE, System Engineering Jon Ballast, Boeing, HPSC Program Overview
9:45 AM    
10:00 AM P. Collier (presented by R. Berger), Navy, SpaceVPX J. P. Walters, ISI, HPSC software
10:15 AM
10:30 AM R. Berger, BAE, SpaceVPX-based heterogeneous modular architecture Jim Lyke, AFRL, Energy Consequences of Information for Spacecraft
10:45 AM
11:00 AM Kevin Butler, UF, Towards Firmware Vetting on Legacy Embedded Platforms Ian Troxel, Troxel Aerospace, Cubesat Processor
11:15 AM
11:30 AM J. Grabowski, Syracuse, Neural research update John Samson, Morehead State, DM7
11:45 AM
12:00 PM Note: Please do not go to Sandia this day; go to the Marriott. A registration desk will be open from about 12:30 PM. Lunch (Sandia, provided) Lunch (Sandia, provided) Limited lunch (Sandia, provided)
12:15 PM    
12:30 PM    
12:45 PM    
1:00 PM Working group 1 (hotel) Cubesats and Smallsats (John Bellardo and John Samson) Working group 2 (hotel) Applications (Marti Bancroft) Ken O'Neill, Microsemi, RISC-V open & roadmap; RTG4 Closed session (meet at CSRI building and carpool or caravan to a different location)
1:15 PM  
1:30 PM David Lee, Sandia, Commercial FPGAs for Space  
1:45 PM  
2:00 PM Chris Wilson, UFl, In-Flight Experiences with the CSP Hybrid Space Computer  
2:15 PM  
2:30 PM Ran Ginosar, Ramon Chips, RC64 High Performance Rad-Hard Manycore and System  
2:45 PM  
3:00 PM Break (provided, hotel) Break (provided)  
3:15 PM          
3:30 PM Working group 1 continued Working group 2 continued Brandon Eames, Sandia National Labs, Trust Analysis for Space Systems:  Modeling the Development of a Spacecraft System Payload  
3:45 PM  
4:00 PM (Cancelled. Stand by.)  
4:15 PM  
4:30 PM Thomas Llanso, JHUAPL, Challenges to achieving space mission resilience to cyber attack  
4:45 PM  
5:00 PM     Wallace Bow, Sandia, Neural network/autonomy via NVIDIA  
5:15 PM Reception (hotel)  
5:30 PM     Drive to hotel; socialize in lobby Dinner on your own
5:45 PM        
6:00 PM        
6:15 PM        
6:30 PM        
6:45 PM        
7:00 PM Working group 3 (hotel) Security I (Ken Heffner) Working group 4 (hotel) Architecture (Steve Horan) Dinner served   Working group 5 (hotel) Security II (Ken Heffner) Working group 6 (hotel) Memory (Jean Yang-Scharlotta)
7:15 PM    
7:30 PM    
7:45 PM Dinner speaker: Håvard Grip, JPL, Mars Helicopter
8:00 PM
8:15 PM
8:30 PM
8:45 PM
9:00 PM
9:15 PM
9:30 PM
9:45 PM
10:00 PM
1. Cubesats and Smallsats
Leads: John Bellardo and John Samson
1:00 PM Roundtable discussion
3:00 PM Break (provided, hotel)
3:30 PM Roundtable discussion (continued)
4:00 PM
5:15 PM
2. Applications
Lead: Marti Bancroft
1:00 PM Marti Bancroft Introduction
1:10 PM Seung Chung and Lorraine Fesq, JPL Toward Understanding Spacecraft Autonomy Computing Needs
1:35 PM Lukas Mandrake, Jack Lightholder Machine Learning, Analytics & Autonomy
2:00 PM Carolina Restrepo, Carlos Villalpando ALHAT
2:25 PM Darrel Raines (JSC) Toward a Spacecraft Auxiliary Computer Design
2:50 PM Break
3:05 PM Hester Yim HEOMD Applications in need of HPSC
3:30 PM Erik DeBenedictis 3D Chips, Architectural Opportunities
3:55 PM Ran Ginosar Supercomputing on Multiple Cores
4:20 PM   Discussion and Wrap-Up
4:45 PM
3. Security I
Lead: Ken Heffner (Honeywell)
7:00 PM Ken Heffner, HONEYWELL, Introduction - Space Enterprise Cyber Security Technologies
7:15 PM
7:30 PM Dr. Kevin Butler, UF, Technical Deep Dive: Foundations of Hardware-Assisted Secure Multiparty Computation with SGX
7:45 PM
8:00 PM Thomas Llanso, JHU, Towards greater rigor and automation in space-cyber risk assessment
8:15 PM
8:30 PM Brandon Eames, Sandia, Trust Analysis for Space Systems:  Modeling the Development of a Spacecraft System Payload (different talk from plenary)
8:45 PM
9:00 PM Michel Sika, USC/ISI, Radiation Mitigation through Arithmetic Codes
9:15 PM
9:30 PM Erik DeBenedictis, Sandia, CREEPY RRNS processor
9:45 PM
10:00 PM Wrap up
10:15 PM
10:30 PM
4. Architecture
Lead: Steve Horan
7:00 PM Steve Horan Introduction
7:10 PM Bob Patti A New AI Centric Processor and Platform
7:35 PM Kent Dahlgren Deterministic Sensor Fusion with the RapidIO Space Device Extensions
8:00 PM Marti Bancroft Promethium Update
8:25 PM Steve Horan HPSC Ecosystem
8:50 PM Steve Horan Discussion and Wrap-up
9:05 PM (schedule compressed due to cancellation)  
9:20 PM
5. Security II
Lead: Ken Heffner (Honeywell)
7:00 PM Ken Heffner, Honeywell, Introduction - Space Systems and Security Enabling Technology
7:10 PM
7:50 PM Mike Frank, Sandia, Adiabatic CMOS demonstration with LC ladder resonators
8:30 PM Earle Jennings, QSigma, Inc., New computer architecture to resist viruses and rootkits
9:10 PM
10:00 PM    
10:50 PM
6. Memory
Lead: Jean Yang-Scharlotta
7:00 PM Jean Yang-Scharlotta Introduction
7:10 PM Jon Slaughter Everspin MRAM Technology
7:35 PM Romney Katti Honeywell Rad Hard Memory Technology and Plans
8:00 PM Patrick Phelan Solid State Recorder Memory Technology
8:25 PM Jim Yamaguchi 3D Stacked "Managed Memory Stack"
8:50 PM Bob Patti An Update on 3D Memory
9:15 PM Dan Nakamura Advanced Memory Study: Use Cases and Requirements
9:40 PM
7. Maestro User Group (MUG)
UPDATE: The Maestro User Group will be an open forum for current and potential users of the rad-hard Maestro many-core processor for space. The Maestro processor is based on the commercial Tile64 processor, has 49 cores, and provides up to 25 GFLOPS and 50 GOPS. The MUG meeting will be an informal and interactive meeting of developers and users to discuss the current state of Maestro hardware and software technology, applications, performance, flight prospects, systems, and user experiences.
Lead: Steve Crago and Marti Bancroft
8:30 AM Welcome and Introductions JP Walters, Steve Crago, Marti Bancroft
8:40 AM Promethium Overview and Hardware Testing Update Marti Bancroft
9:20 AM Software update  JP Walters
9:50 AM MFE update Steve Crago
10:00 AM End of formal agenda; time for individual discussions if desired
11:00 AM
Abstracts
Jon Ballast, Boeing NASA High Performance Spacecraft Computing (HPSC) Chiplet Development Program Overview
Boeing is the prime contractor for the NASA High Performance Spacecraft Computing (HPSC) Chiplet Development Program.  The HPSC Chiplet will enable game-changing spacecraft missions envisioned by NASA and the US Air Force by providing orders-of-magnitude improvement in processing performance and power efficiency, as well as flexibility to dynamically trade between processing throughput, power consumption, and fault tolerance to meet varying demands and priorities across missions and mission phases.  The objectives of the HPSC Chiplet program are: 1) to design, verify, fabricate, and test a prototype HPSC radiation-hardened by design (RHBD) ARM-based multicore processor “Chiplet,” and 2) to develop the Chiplet software development ecosystem, including system software, software development tools, Chiplet emulators, and an evaluation board.  This presentation will summarize Boeing’s HPSC program plan, with an emphasis on Chiplet hardware development.
Richard Berger, BAE SpaceVPX-based heterogeneous modular architecture
The SpaceVPX standard was developed by a consortium of government and government contractors to provide a next-generation standard for scalable high performance fault-tolerant on-board space computing.  BAE Systems was and continues to be a leader in this effort and has committed to developing compliant and interoperable merchant market solutions for the industry.  This presentation discusses the state of that development, our reasoning behind it, the issues being addressed, and some plans for the future.  These module-level products leverage BAE Systems technology as well as that of others.
Kevin Butler, UF Towards Firmware Vetting on Legacy Embedded Platforms (plenary)
Embedded systems play an increasingly large role in computing, particularly in real-time and mission-critical environments. However, particularly with off-the-shelf components, there is no reliable way to vet the integrity of such devices. Some devices contain firmware that is signed by the manufacturer; however, this merely demonstrates that an entity is in possession of a private key, and no actual attestation of device integrity is contained in such a signature.

We consider a case study of the Universal Serial Bus (USB) protocol as it is implemented in small embedded microcontrollers used in flash drives and small-scale human interface devices. Specifically, we examine the Intel 8051 microcontroller and demonstrate how our domain knowledge of the USB protocol can guide the symbolic analysis of firmware on these devices to efficiently determine the presence of malicious code paths. We develop targeting algorithms to efficiently perform these activities, discuss the challenges of lifting legacy architectures into intermediate representations suitable for symbolic analysis, confront the limitations of existing analysis tools, and present potential ways forward for improving these tools.
Kevin Butler, UF Technical Deep Dive: Foundations of Hardware-Assisted Secure Multiparty Computation with SGX (working group)
Secure multiparty computation (SMC) ensures that two or more parties can jointly compute a function and receive an answer without revealing their inputs to any other party. It is a powerful means of preserving privacy and can form the basis for further privacy-oriented cryptographic primitives such as obfuscation. While great strides have been made in reducing the cryptographic overheads required to perform these operations, they still cannot be performed in real time.
However, with the introduction of new CPU support for secure computation, most notably through Intel’s Software Guard eXtensions (SGX), new opportunities have emerged to vastly improve the performance of SMC.

This talk considers a new hybrid protocol that allows for secure computation using a combination of SGX primitives and traditional circuit garbling mechanisms, with a formal consideration of the different security models inherent to each computing paradigm. We demonstrate that two-party evaluation can be performed securely with regard to a particular partitioning of a function, such that the traditional garbling phase maintains standard SMC guarantees while the SGX-enabled phase leaks nothing more than small amounts of intermediate information. When applied in practice, our scheme can yield up to a 38x improvement in computational speed compared to traditional garbled circuits.
Seung Chung and Lorraine Fesq, JPL Toward Understanding Spacecraft Autonomy Computing Needs
This presentation looks at Spacecraft Autonomy in a broad sense, and establishes a framework that can be used to analyze computing needs for future autonomous systems. Autonomy can be viewed as a spectrum, from minimal autonomy that is limited to a few functions and operates under close human supervision, to more capable autonomy that controls critical functions and makes decisions even in the presence of uncertainty.  Autonomy can also be decomposed into layers, from the lowest level of vehicle state assessment to high level intent recognition of other autonomous agents.  Each of these levels is viewed along the Autonomy Spectrum, and decomposed into basic, common technologies used by many autonomy algorithms such as image recognition, machine learning, and planning. Finally, we examine each layer and provide a qualitative assessment of the limitations introduced by each of five computing metrics, namely processing speed, memory size, memory access rate, input/output access rate, and the number of cores available. We conclude with a collective view of the computational needs for all the layers and across the autonomy spectrum.
Kent Dahlgren, Praesum Deterministic Sensor Fusion with the RapidIO Space Device Extensions
This presentation will discuss how RapidIO can be used to transport sensor data with predictable bandwidth and timing characteristics. It will start with an overview of the latest RapidIO specification and the key features of the Space device extensions that support sensor data fusion, including precision time synchronization and structurally asymmetric links. In addition, the paper will discuss deployment of RapidIO switching technology on state-of-the-art SoC FPGAs to implement data reduction for high-bandwidth sensor data.
Erik DeBenedictis, Sandia RRNS Processor
This is a report on a project by Bobin Deng and Sriseshan Srikanth of Georgia Tech, supported by Sandia. The talk discusses a processor design and simulation in which the arithmetic is performed using a Redundant Residue Number System (RRNS). RRNS offers error detection and correction for logic as well as memory. This makes RRNS similar in effect to Triple Modular Redundancy (TMR), but with much lower overhead. The talk will very briefly explain RRNS, the processor architecture used for simulation, and simulation results. The conclusion is that an RRNS processor in combination with a variant of RHBD may be an option for spacecraft processors.
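As a concrete illustration of the redundancy idea, the short Python sketch below (a toy example written for this program, not material from the talk; the moduli and the range-based consistency check are illustrative assumptions) shows how carrying extra residues lets a residue-coded word be checked for a single-residue fault after ordinary digit-wise arithmetic, which is the property that lets RRNS stand in for TMR at lower overhead.

# A minimal sketch (not from the talk): a toy Redundant Residue Number System.
from math import prod

NONREDUNDANT = [3, 5, 7]      # legitimate dynamic range is 3*5*7 = 105
REDUNDANT = [11, 13]          # extra residues carried only for checking
MODULI = NONREDUNDANT + REDUNDANT

def encode(x):
    # Represent x by its residue modulo every modulus.
    return [x % m for m in MODULI]

def crt(residues, moduli):
    # Chinese Remainder Theorem reconstruction of the unique value mod prod(moduli).
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli)) % M

def has_error(residues):
    # A fault in any single residue pushes the reconstructed value outside the
    # legitimate range [0, 105), so a simple range check detects it.
    return crt(residues, MODULI) >= prod(NONREDUNDANT)

a, b = encode(17), encode(23)
s = [(x + y) % m for x, y, m in zip(a, b, MODULI)]   # digit-wise add: 17 + 23
print(has_error(s))                                  # False: the sum 40 is consistent
s[2] = (s[2] + 1) % MODULI[2]                        # emulate a single-event upset
print(has_error(s))                                  # True: the corrupted word is detected

In general RRNS practice, single-residue correction (which the talk also covers) is typically done by reconstructing from subsets of residues and keeping the consistent result.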
Erik DeBenedictis, Sandia 3D Chips, Architectural Opportunities
3D packaging without an architecture change can reduce SWaP by, say, 10x, but we will discuss how architectural changes can raise the benefit substantially. Current chips are either logic or memory, which makes algorithms with a lot of back-and-forth dependencies between the two types of chips inefficient in terms of energy, bandwidth, or latency. We developed an architecture called "Superstrider" that implements an associative array computing paradigm using an algorithm that depends on 3D for efficiency. The associative array paradigm is a low-level kernel that can apply to sparse linear algebra, graph analytics, and machine learning. Superstrider could be implemented in a spacecraft in the short term as a custom controller for High Bandwidth Memory. We simulated a scale-up path starting with implementation as an HBM controller and ending with fully integrated 3D chips (using Stanford's N3XT proposal) and obtained speedups of 50x to over 1000x from the 3D algorithm alone.
Brandon Eames, Sandia On Trust Analysis for Microelectronics-Based Systems (Tuesday night talk)
Microelectronics-based systems pervade modern society, from small mobile devices, to vehicle automation, to large scale data centers, to military systems.  We depend on these systems not only for logistics and communication, but increasingly for safety and security.  Unfortunately, the pervasiveness of software-based exploits of commercial enterprises exposes the broad question of whether microelectronics-based systems can be trusted to perform their intended function when called upon.
The trust issue is pervasive and has proven elusive to structured science and engineering approaches that aspire to deflect malicious alterations.  In the absence of a system science supporting trust, developers employ opinion-based risk assessments, red team analysis, and system access denial to increase confidence that systems will perform as intended.  Confidence is purchased via certification, waiver and opinion-based analysis rather than quantifiable, engineering-based approaches to evaluate and endow trust.
In this presentation, we discuss the problem of trust and trust analysis, motivated in the context of microelectronics based system development.  We discuss tools and techniques that have been developed for evaluating system security, and show their applicability for evaluating trustworthiness of microelectronics based systems. Specifically, we discuss RIMES (Risk Informed Management of Enterprise Security), a relative risk assessment based technique for evaluating security that has been recently applied to trust evaluation.  We present a game theoretic technique for evaluating the effectiveness of moving target defenses called PLADD (Probabilistic, Learning Attacker, Dynamic Defender), and discuss its applicability as a basis for trust evaluation.
Michael Frank, Sandia Feasible demonstration of ultra-low-power adiabatic CMOS for cubesat applications using LC ladder resonators
Small space platforms such as cubesats are typically highly constrained in the power available for on-board computation, limiting the scope of achievable missions.  Unfortunately, conventional approaches to low-power computing in CMOS are limited in their energy efficiency, because they still follow the conventional irreversible computing paradigm, in which digital signals are destructively overwritten on every clock cycle, dissipating the associated CV² signal energy to heat.  In an alternative approach called reversible computing, which can be implemented in rad-hard CMOS, we can adiabatically transform digital signals from old states to new ones with almost no dissipation of signal energy, instead recovering almost all of the signal energy and reusing it in subsequent operations.  At relatively low (MHz scale) frequencies, this approach can yield orders-of-magnitude gains in power-limited parallel performance compared to more conventional approaches to low-power CMOS.  In this paper, we propose a feasible near-term demonstration of reversible adiabatic CMOS at attojoule-per-operation energy scales, using custom LC ladder resonators integrated in-package with the logic IC to achieve high-quality energy recovery.
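As a rough rule of thumb (standard back-of-envelope estimates, not figures taken from the talk), the dissipation comparison behind this claim is:

\[
E_{\text{conventional}} \sim C V^2 \ \text{per switching cycle}, \qquad
E_{\text{adiabatic}} \sim \frac{RC}{T}\, C V^2 \quad (T \gg RC),
\]

so spreading each transition over a ramp time $T$ that is $k$ times longer than the circuit's $RC$ constant cuts the dissipated energy by roughly a factor of $k$, which is why MHz-scale adiabatic operation can win by orders of magnitude over conventional switching.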

 
Mitch Fletcher, Arrowhead SE Back to the Basics: Can Architecture Framework Requirements Lead to a Better Architecture?
Architecture Framework system engineering decomposition techniques have been around since the 1940s.  However, it seems that the jump to start a design based upon existing technology blocks, without considering the user operational requirements, has dominated the methodology over the past two decades.  This trend is alarming in two ways:

1)      Defining a system without knowing ALL the requirements typically results in missing 5% to 10% of necessary requirements that are not identified until integration.  Requirements defined at integration cost more than an order of magnitude more to implement than requirements defined prior to physical implementation.

2)      When systems are specified and implemented based upon tribal knowledge, an additional 20%-30% of requirements are implemented that are not required and provide no added value in the deployed system.  The cost of implementation, test, and verification of these requirements is staggering and drives the largest percentage of wasted cost on a project.

This talk will give a very brief overview of the latest industry best practice for system engineering, the Department of Defense Architecture Framework (DoDAF), and examine how it can be applied to the architecture of advanced fault-tolerant computing systems.
Ran Ginosar, Ramon Chips RC64 High Performance Rad-Hard Manycore and System
RC64 is a rad-hard shared-memory many-core DSP, designed for high-performance low-power space payloads. The chip is being fabricated on TSMC 65 nm and prototypes are expected in July 2017. Several 3U-VPX cards are designed for RC64. Large “supercomputing” payloads combining tens of RC64 chips with mass memory and multiple interfaces are planned for telecom and imaging satellites. Software for developing applications has been demonstrated. Application prototypes under development include multi-modems, networking, SAR signal processing, radio processing, and machine learning. Funding has been provided by the Israeli Government and the European Commission.
Jessie Grabowski Neuromorphic Computing for classification in Optical and SAR data with the IBM TrueNorth and FPGAs
Neuromorphic processors model biological neural systems to implement computational architectures that have the potential to realize significant power savings over the traditional architectures of GPGPUs, CPUs, and microcontrollers. Hardware implementations of neuromorphic architectures can be especially well suited to the application of machine learning algorithms, such as the forward pass of deep neural networks, in SWaP-constrained embedded systems.  Here we present a summary of the power consumption and classification performance of deep convolutional neural networks and gradient-boosted decision trees trained for object recognition in optical and Synthetic Aperture Radar (SAR) datasets, operating on hardware platforms including IBM’s TrueNorth chip and Xilinx Field Programmable Gate Arrays (FPGAs).
Earle Jennings, QSigma, Inc. New computer architecture to resist viruses and rootkits
Today’s data centers, their handheld computers, and network sensors are discussed in terms of how they are penetrated by viruses and rootkits. A fundamentally new computer architecture is presented in terms of cores, and modules of cores, which can be used in handheld devices, network sensors, DSP, and HPC computers. These cores can be implemented for compiler-level application compatibility with an existing microprocessor, and can be proven semantically compatible with that microprocessor. This new computer architecture physically separates instruction memories from data-related memories, removing the possibility of data memory faults triggering the installation of viruses, rootkits, and other malicious software. Application compatibility is ensured by the semantic compatibility of the cores with the existing superscalar microprocessor. The superscalar interpreter is turned into a software utility acting on the compiler's output, thereby removing it from the hardware. This reduces both silicon surface area and energy consumption by roughly a decimal order of magnitude. Communications, memory controllers, and memory devices throughout the data center, handheld computers, and network sensors physically segregate task-instruction information from data-related information to further remove any opportunity for these hidden threats to become installed threats.
David Lee, Sandia Leveraging Commercial FPGA Technologies for Space Flight Applications
Technology advancement is typically associated with Moore’s law, which states that the transistor count in an integrated circuit should roughly double every two years.  In general, commercial electronics have been able to advance at this rate; however, space-based electronics have experienced hindered growth due to the costly and time-consuming measures that are necessary to qualify electronics for the harsh space environment.
This fact has limited the availability of current space-qualified device offerings despite the growing need for more advanced satellite capabilities.  In particular, one class of processing device – the Field-Programmable Gate Array (FPGA) – plays a key role in numerous space missions; however, current space-qualified FPGAs are built with technology processes at least four generations behind the current state-of-the-art.  As such, modern space FPGAs have significantly lower performance and capability compared to commercially available devices. 
One possible approach to close this “technology gap” is to use cutting-edge commercial FPGAs in space, but the devices must operate in the harsh space environment.  As such, the effect of radiation on these devices is one consideration that must be characterized before the space community will accept commercial devices as a viable option for flight.  This presentation will summarize research performed over the last several years evaluating commercial FPGA technologies for future space flight applications, with a focus on radiation testing results across a range of modern Xilinx devices from their 28 nm 7-Series devices to the latest 16 nm FinFET-based UltraScale+.
Jack Lightholder, JPL Applications of Onboard Analytics and Machine Learning from the JPL Machine Learning and Instrument Autonomy Group
The Machine Learning and Instrument Autonomy (MLIA) group at NASA’s Jet Propulsion Laboratory is tasked with bringing machine learning and data mining technology to the hard data problems encountered in the scientific, rigorous analysis of spacecraft instrument data.  These efforts have resulted in numerous terrestrial and onboard techniques for distilling data for science purposes and spacecraft operations.  These technologies fly on low Earth orbit missions and landed Mars missions, and are currently being developed for deep space missions.  The Earth Observing One (EO-1) and Intelligent Payload Experiment (IPEX) missions utilized machine learning algorithms and intelligent task scheduling software to optimize science return through the prioritization of images for downlink while automatically scheduling follow-up observations of targets of significant interest.  The Mars Science Laboratory rover, Curiosity, and the Mars Exploration Rovers utilize the Autonomous Exploration for Gathering Increased Science (AEGIS) system, which enables opportunistic data collection during periods when ground operations teams cannot be in the loop to command the rovers.  AEGIS allows for auto-targeting of the Chemistry Camera (ChemCam), based on an onboard analysis of navigation camera images.  Targets of opportunity are determined and follow-up observations are scheduled with the narrow-field ChemCam instrument.  The Near Earth Asteroid Scout (NEA Scout) CubeSat mission, launching in 2018 on the Space Launch System’s EM-1 mission, will utilize onboard data processing to facilitate target detection and data downlink optimization, overcoming the inherent constraints of limited pointing accuracy, power, and data bandwidth on small spacecraft platforms traveling far from Earth.  Machine learning and instrument automation technologies can support engineering and science operations for missions of all types. As spaceborne computing technology continues to advance, the opportunities for more computationally intensive algorithms to be placed onboard will continue to allow more flexibility in the way we operate spacecraft.
Thomas Llanso, JHU/APL Towards greater rigor and automation in mission-cyber risk assessment (Tuesday evening)
The cyber risk to space missions is a serious and growing concern, especially as critical systems become increasingly interconnected.  While the space community has long dealt with a host of non-cyber threats (e.g., kinetic, space weather, jamming), the goal of quantifying mission risk due to cyber effects has been hampered by a lack of rigorous assessment approaches that can be applied at scale.  This talk provides an overview of ongoing research called BluGen that is making progress in this area.  The various analytics employed by BluGen, as well as the accompanying cyber knowledge database, will be discussed.  To make the presentation more concrete, a simplified space ground system example will be used to illustrate BluGen in use.
Thomas Llanso, JHU/APL Challenges to achieving space mission resilience to cyber attack (Wednesday morning)
This talk summarizes a range of existing challenges to achieving space mission resilience to cyber attack.  The challenges discussed include breaking out of the compliance mindset, the cyber data problem, challenges associated with rigor in assessing mission impact and risk due to cyber effects, and the need for better threat characterization, mission-cyber situational awareness, and resilience frameworks.
Jim Lyke, AFRL Energy consequences for information processing in spacecraft
Conventional wisdom in the spacecraft domain is that on-orbit computation is expensive, and thus information is traditionally funneled to the ground as directly as possible. The explosion of information due to larger sensors, the advance of Moore’s law, and other considerations leads us to revisit this practice. In this article, we consider the trade-off between computation, storage, and transmission, viewed as an energy minimization problem. This work drives us to consider the implications of the “Landauer limit” for the energy dissipation of computation and storage, and we note the paucity of thermodynamic lower-bound estimates for even simple algorithms. We revisit the long-considered possibility of using adiabatic approaches to meet (and transcend) the Landauer limit.  We also propose the use of energy-based accounting as a possible methodology for feature comparison and decision-making.   In examining this topic, we further comment on the extended footprint of energy consumption across not only the spacecraft, but throughout the elements of a space mission that interact with the spacecraft.
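For context (a textbook figure, not a number from the talk), the Landauer limit referenced here is the minimum energy to erase one bit at temperature $T$:

\[
E_{\min} = k_B T \ln 2 \approx (1.38\times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9\times 10^{-21}\ \mathrm{J} \approx 2.9\ \mathrm{zJ},
\]

many orders of magnitude below the per-operation energy of practical CMOS, which is what leaves room for the adiabatic and energy-accounting arguments in the talk.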
Ken O'Neill & Larrie Carr, Microsemi RISC-V - An Open Instruction Set Architecture for the Future of Space Computing
Microprocessor IP cores for FPGA deployment have historically been constrained by off-shore RTL development and expensive licensing fees, and in some cases by inflexible selections of peripherals. The RISC-V open ISA brings opportunities for designers of national security space systems to develop embedded systems with smaller footprints, lower power consumption, flexible peripheral choices, and inspectable RTL developed on-shore, without costly licensing fees. In this presentation we will provide an overview of the RISC-V ISA, and provide some example benchmarks of RISC-V implementations in Microsemi RTG4 radiation tolerant FPGAs.
Bob Patti, KnuEdge An Update on 3D Memory
High-performance computing needs low-latency, high-bandwidth, and high-capacity memories more than ever before. HBM devices have addressed part of this need, but a low-latency alternative is still needed.  In this talk we will discuss the latest updates on the Tezzaron 3D memories, current challenges, applications, and future 2.5D/3D plans.
Bob Patti, KnuEdge A New AI Centric Processor and Platform
Many factors conspire to limit significant gains in computing power.  Conventional computing paradigms are failing. The hardware fundamentals of Moore’s law stumble against the hard realities of physics; ever-increasing numbers of cores strain our capability to effectively program them.  New computing paradigms must evolve, and a dominant force in this evolution is Artificial Intelligence (AI).  AI demands a new vision of computer hardware based on advanced packaging, low latency, and true heterogeneous computing.
This talk explores the future of AI hardware by focusing on a new highly scalable processor and platform. The KnuEdge Hermosa processor was designed from the ground up with a concentrated emphasis on AI computing.  Its extreme scalability addresses tiny IoT and handheld devices as well as enormous exascale HPC applications.  A key element of the platform is low latency, both in its plentiful memory and in its communications across millions of processing cores. The hardware design incorporates 2.5D and 3D integrated circuits with true heterogeneous computing elements, stepping ahead of Moore’s Law into the new era of AI-driven computing.
Patrick Phelan Data Access Architectures for High Throughput, High Capacity Flash Memory Storage Systems
We present a design framework and software tools which support the development of high performance, high capacity data storage hardware systems employing flash memory technology.  This framework facilitates the design of data storage systems which provide multiple terabits of storage and access and retrieval rates of several gigabits per second.  This methodology supports the design of data storage systems with widely varying functional requirements by enabling rapid exploration of the design space, providing automatic validation of functional correctness, and providing accurate quantitative predictions of performance.  We present two case studies demonstrating the flexibility and scope of this approach, and describe progress toward the implementation of a prototype data storage system designed using the framework.
Darrel Raines Toward a Spacecraft Auxiliary Computer Design
Johnson Space Center is working toward the design of a spacecraft computer that can be added to most spacecraft designs as an auxiliary computing device. Such a design must complement the existing computing devices. It must be reconfigurable for scientific computation or for augmenting crew display capabilities. This session will promote discussion of existing design concepts and the future direction of that design.
Carolina Restrepo GN&C Technologies for Safe and Precise Landing
GN&C technologies for safe and precise landing have computationally expensive algorithms that require real-time performance during the short duration of Entry, Descent and Landing (EDL).  The algorithms provide capabilities such as terrain relative navigation, guidance maneuvers, hazard detection and avoidance (HDA), and hazard relative navigation.  Multiple NASA projects have developed prototype technologies for these functions.  Two recent NASA projects, ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) and COBALT (Co-Operative Blending of Autonomous Landing Technologies) tested new GN&C sensors and algorithm prototypes for precision landing that leveraged the Tilera-64 and an FPGA for advanced processing.  A spaceflight implementation of these technologies requires a path-to-flight processing capability such as the HPSC (High Performance Space Computing) processor to achieve the real-time performance needs during EDL for robotic and human missions. 

 
John Samson, Morehead State Update on the Dependable Multiprocessor DM7 ISS Flight Experiment
The DM7 ISS flight experiment was launched to the ISS in early December on HTV-6.  The NREP (NanoRacks External Platform) Mission 1/Mission 2 switch-over occurred on April 27th, at which time the DM7 experiment was installed on the NREP.  The NREP was re-deployed to the outside of the ISS on April 27th.  The DM7 payload was activated on April 28th.  As soon as power was applied, the DM7 experiment started operating and downlinking telemetry.   In the month DM7 has been on-line, all three DM7 experiment missions (the Radiation Effects Mission, the Fault Tolerance Capabilities Mission, and the Ground-Commanded Image Compression Mission) were successfully demonstrated.  For the Ground-Commanded Image Compression Mission, 100x- and 1000x-compressed images were captured and downlinked to the ground.  Twenty-nine (29) snapshot images from the CameraMission100 (100x image compression) test run on Friday, May 5th were merged into a time-lapse “video” montage.   The “video” shows the changing cloud patterns on the Earth and the moving shadows on different parts of the ISS structure as the ISS transits in its orbit.  It also shows articulation of the Russian solar arrays. These images are unique because most on-orbit cameras face the Earth; the DM7 camera faces aft.  The DM7 flight experiment is scheduled to last 6 months. The presentation will provide an overview of the DM7 ISS flight experiment, a summary of the results of initial on-orbit functional check-out and operation, and a discussion of DM7 efforts for the remainder of the experiment period.
Harald Schone Are Reliable CubeSats an Oxymoron?
A government consortium met with industry and academia for a two-day workshop with the objective of fostering a collaborative public-private dialogue.  The workshop gathered mission assurance approaches, novel test approaches, and mitigation techniques associated with SmallSat mission risk classifications, while maintaining, to the extent practical, the cost efficiencies associated with small satellite missions.  These mission risk postures span from “do no harm” to those whose failure would result in loss or delay of key national objectives. Historically, it was understood and accepted that "high risk” and “CubeSat” were largely synonymous; the workshop outlined potential approaches to change that perception.
Michel Sika Radiation Mitigation through Arithmetic Codes
The Radiation Mitigation through Arithmetic Codes (RadMAC) effort has been exploring the effectiveness of residual arithmetic codes in providing some measure of radiation mitigation by simulating, fabricating, and analyzing such circuits.  Encouraging results have already been demonstrated in 28 nm ASIC and FPGA implementations of the approach.  The project is currently designing more complex circuits in a FinFET-based technology node ASIC for further demonstration of the scaling properties of the approach.
Jon Slaughter Everspin MRAM Product and Technology Status 
First-generation MRAM, based on Magnetic Tunnel Junction (MTJ) devices and a field-switching innovation called “Savtchenko switching,” is mass produced by Everspin in densities up to 16Mb. This “Toggle” MRAM has been in production for over eight years and is used in applications demanding a unique combination of speed, nonvolatility, and reliability. The second generation of MRAM employs spin-torque switching, which enables smaller MTJ devices for higher density. Everspin has shipped the industry’s first ST-MRAM product, a 90nm-node, 64Mb DDR3 ST-MRAM based on in-plane MTJ devices (iMTJ). MTJ devices with magnetization perpendicular to the film plane (pMTJ) reduce the write energy considerably and enable drastic size scaling of the MTJ for advanced technology nodes, extending the density of discrete ST-MRAM products to the Gb range and supporting embedded MRAM with tunable performance characteristics. We have recently reported on a fully functional 256Mb DDR3 ST-MRAM product made with pMTJ arrays on 40nm-node CMOS wafers and a 1Gb DDR4 discrete memory product fabricated in 28nm process technology under a joint development project with GLOBALFOUNDRIES. Challenges in developing manufacturable ST-MRAM include achieving certain key MTJ device parameters while also controlling bit-to-bit and die-to-die distributions in the memory array and reducing the MTJ device size. The commercial roadmap, scaling, and performance options/tradeoffs will be discussed.
Ian Troxel, Troxel Aerospace Industries Fault Tolerant GPU Processor Systems for Small Satellite Artificial Intelligence Applications
Small satellite platforms are gaining in popularity to address increasingly complex missions due to their cost-effectiveness and ability to rapidly develop and field a solution.  However, system capabilities are often limited by size, weight, and power (SWaP) constraints and relatively lower reliability, which tend to reduce overall mission effectiveness.  Unibap is addressing SWaP limitations by fielding AMD-based GPU processors to enhance on-board mission processing capabilities, particularly in the area of Artificial Intelligence (AI) applications honed through industrial robotics automation, to increase information extraction and automated decision making.  Twenty-four units have been delivered to Satellogic’s Aleph-1 Earth observation constellation, the first two vehicles of which launched on May 30, 2016 and have been operating successfully since.  Troxel Aerospace is developing novel image processing algorithms that are greatly enhanced by Unibap’s space-qualified GPU processors.  Total dose and latch-up susceptibility are improving as VLSI feature sizes shrink, but this trend is also increasing the rate of single-event effects, which impact system reliability.  To address this concern, Troxel Aerospace is developing SEE Mitigation Middleware targeted at many- and multi-core processors, which is being applied to provide increased fault tolerance for Unibap’s and other processor products.  This presentation describes the technology fielded for the Satellogic constellation and plans for future development.
JP Walters, USC/ISI The HPSC Chiplet Software Ecosystem
With an architecture based on the ARMv8 processor, the HPSC Chiplet combines open source software with novel software-based fault tolerance.  The HPSC Chiplet leverages a wealth of open source software, including operating systems, compilers, debuggers, and common libraries.  Its Linux solution is based around the Yocto project, providing developers with a common embedded baseline for the Linux BSP.   In addition, a software-based fault tolerance solution will be developed that will enable application-specific fault tolerance. This will provide programmers with a familiar development environment and a flexible and runtime-adaptable fault tolerance solution. Finally, this talk will cover the two Chiplet emulators being developed as part of the HPSC program, which will provide developers with platforms for early software development.
Chris Wilson In-Flight Experiences with the CSP Hybrid Space Computer
Space is a hazardous environment, with challenging operating conditions caused by radiation effects that can degrade and catastrophically impair electrical, electronic, and electromechanical systems. Challenged by constraining resource limitations (e.g., cost, power, size, weight), space agencies and companies must accomplish more with less, with systems that can operate reliably while meeting demanding mission requirements.
In response to this need, the Department of Defense (DoD) Space Test Program (STP) helps evaluate and advance new technologies for the future of spaceflight.  STP-Houston provides opportunities for organizations to perform on-orbit research and technology demonstrations from the International Space Station (ISS), to verify and provide flight heritage for promising new technologies.
To address performance and reliability requirements for future missions, researchers at the NSF Center for High-Performance Reconfigurable Computing (CHREC) at the University of Pittsburgh have developed a concept for hybrid space computing that features a hybrid processor architecture (fixed and reconfigurable logic) merged with an innovative hybrid system design (radiation-hardened and commercial components with fault-tolerant computing strategies). The culmination of this concept was the recent development of the CHREC Space Processor v1 (CSPv1). This presentation describes lessons learned and preliminary results from the CSP experiment onboard the ISS as part of the STP-H5 mission.
James Yamaguchi  Bringing 3D Memory Cubes to Space
The onboard computing capabilities of spacecraft are a major limiting factor for accomplishing many classes of future missions. Although technology development efforts are underway that will provide improvements to spacecraft CPUs, they do not address the limitations of current onboard memory systems. In addition to CPU upgrades, effective execution of data-intensive operations requires high-bandwidth, low-latency memory systems to maximize processor usage and provide rapid access to observational data captured by high-data-rate instruments.  The integration of a COTS 3D stacked device with a Rad Hard by Design (RHBD) controller offering a high-speed serial interface provides a versatile, scalable architecture.  These features, coupled with high bandwidth capabilities, increased memory density, and EDAC, make this memory module a good candidate for future use with NASA’s High Performance Spaceflight Computer (HPSC). These 3D memory modules will help produce a low-SWaP alternative to their 2D counterparts. The purpose of this research is to continue to advance this 3D technology in order to ultimately incorporate these 3D memory modules into future space missions.
Hester Yim HPSC HEOMD Applications at JSC
I will be presenting the activities at JSC, particularly in Crew Countermeasure Software (CMS) and Habitat Virtualization ground test activities.  The CMS project supports Astronaut Strength, Conditioning and Rehabilitation for long-term stays in the space environment.  The objectives of CMS are to improve efficiency, reduce training time, reduce crew frustration (improve user satisfaction), provide an integrated dashboard (a crew view of their exercise progress toward goals), provide a virtual trainer assistant (increase countermeasure results with reduced injuries), and address many other countermeasure needs by integrating existing GUIs for crew and implementing them in newer-generation systems, by developing VR/AI coaching and real-time instructions on board, and by providing a better environment via VR/telepresence technologies.   This is a grant-based study that will continue toward application in future habitat environments.
I will also present various habitat virtualization activities that are ongoing at JSC in collaboration with other NASA centers and academia, including the utilization and evaluation of VR/AR/MR technologies on the ISS.

Date of this document: May 29, 2017 (b)