The onboard software that manages the data and their flow -hereafter the EUI Data Manager (EDM)- is designed to strike a good trade-off between the instrument's potential for enormous data rates and the limited telemetry typical of Solar Orbiter as a deep-space encounter mission.
The basic concept relies on early pre-processing, including compression of the images within the common electronics box -CEB- (or possibly within the front-end electronics -FEE-, viz. the camera head), and a subsequent filtering and prioritization of the images in the CEB.

Data Flow and Processing

Each FEE digitizes the analog signals from the APS imaging detectors (with a 14-bit ADC) and forwards them to the CEB.

In order to reduce the read-out noise, the APS sensors will normally be operated in non-destructive read-out (NDR) mode, i.e. the sensors are read out repeatedly (at least twice) during an exposure, and the acquired frames are combined into a final image by the EDM. NDR enables several smart features:
  • Early identification of cosmic-ray hits on specific pixels
  • Upload of updated pixel maps for the identification of bad pixels (hot, dead, etc.)
  • Multiple Adaptive Reset (MAR), i.e. reset of pixels approaching full well, providing both enhanced dynamic range and proper temporal resolution in the brighter areas.
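As an illustration, combining the repeated reads into a final image while checking for cosmic-ray hits can be sketched as follows. The per-pixel slope fit and the jump-detection threshold are illustrative choices, not the flight algorithm:

```python
import numpy as np

def combine_ndr(frames, times, jump_sigma=5.0):
    """Combine non-destructive reads into one image by fitting the
    signal ramp per pixel; flag pixels whose frame-to-frame increments
    jump far above the local trend (candidate cosmic-ray hits).

    frames: (n_reads, ny, nx) array of raw reads (DN)
    times:  (n_reads,) read-out times within the exposure (s)
    Returns (flux, cosmic_mask).
    """
    frames = np.asarray(frames, dtype=float)
    t = np.asarray(times, dtype=float)
    # Per-pixel least-squares slope: flux = cov(t, f) / var(t)
    t_mean = t.mean()
    f_mean = frames.mean(axis=0)
    slope = ((t - t_mean)[:, None, None] * (frames - f_mean)).sum(axis=0) \
            / ((t - t_mean) ** 2).sum()
    # Increments between successive reads; a cosmic ray adds a sudden step
    diffs = np.diff(frames, axis=0)
    step = np.median(diffs, axis=0)
    mad = np.median(np.abs(diffs - step), axis=0) + 1e-12
    cosmic = (np.abs(diffs - step) > jump_sigma * 1.4826 * mad).any(axis=0)
    return slope, cosmic
```

Because a cosmic ray only corrupts the reads after the hit, the affected pixel can be flagged (or re-fit over the clean reads) rather than lost for the whole exposure.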
Ultimately the EDM also compresses the images. The features listed above improve the quality of the final image, minimizing the loss of information in the compression process (which also becomes faster). Furthermore, a few simple characterizations of the original image are additionally produced. The output is then a data lump containing, in order:
  • a header that identifies the image,
  • an ID and parameters such as its time of observation,
  • its telescope and science operation program of origin,
  • moments of the histogram,
  • a low-resolution rebinned version of the original image,
  • the above image characterization, and finally
  • the compressed image.
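The assembly of such a data lump can be sketched as below. The field layout, the `EUI0` magic bytes, and the use of zlib as a stand-in for the onboard compressor are all hypothetical; only the ordering of the fields follows the list above:

```python
import struct
import zlib
import numpy as np

def build_data_lump(image, telescope_id, program_id, obs_time):
    """Assemble a telemetry 'data lump': header/ID and observation
    parameters, histogram moments, a rebinned low-resolution image,
    and finally the compressed image."""
    img = np.asarray(image, dtype=np.uint16)
    # Histogram moments: mean, variance, skewness of the pixel values
    flat = img.astype(float).ravel()
    mean, var = flat.mean(), flat.var()
    skew = ((flat - mean) ** 3).mean() / (var ** 1.5 + 1e-12)
    # Low-resolution thumbnail by block averaging down to 64x64
    # (assumes dimensions divisible by the binning factor)
    ny, nx = img.shape
    thumb = img.reshape(64, ny // 64, 64, nx // 64).mean(axis=(1, 3))
    # Placeholder for the JPEG2000/LWT compressor
    payload = zlib.compress(img.tobytes())
    header = struct.pack("<4sBBdddd", b"EUI0", telescope_id, program_id,
                         obs_time, mean, var, skew)
    return header + thumb.astype(np.uint16).tobytes() + payload
```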

Flags for Prioritization

A quality index, or priority number, attributed to each image is used to select the data lumps to be temporarily stored onboard. The level of priority will ultimately determine whether an image is earmarked for telemetry. This priority number can be set from the ground as part of the planning. For example, to build the synoptic set, the science planner might attribute the highest priority to the corresponding FSI images. Another example is the planning of a high-cadence movie by attributing from the ground a high priority number to each HRI image in a certain time interval.

On board, the EDM will continuously run a number of image processing procedures on the characterization parameters (low-resolution images, moments) attached to the images as they arrive from the FEE. The purpose of this processing is a real-time update of the priority number of the analyzed image (and/or of other images taken within the same time range) according to the following (preliminary) selection criteria:

| Flag | Method of detection | True | False |
| ---- | ------------------- | ---- | ----- |
| Correct exposure | Histogram moments | Priority unchanged | Priority reduced; flag set to feed back on the commanding |
| Flare occurrence | Low-resolution image & histogram moments | Increase priority of HRI images before and after the event; if real-time detection, increase the cadence (TBC) | Priority unchanged |
| EIT wave occurrence | Moments of FSI differenced images | Increase priority of HRI images (if the HRI and FSI FOVs overlap) | Priority unchanged |
| TBD flags to and from the rest of the payload | XXXXXXXXXXXX | XXXXXXXXXXXX | XXXXXXXXXXXX |

Selection criteria for priority flag assignment.
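The first row of the table, the correct-exposure check, can be sketched as a simple decision on quantities already present in the data lump. The DN range and saturation threshold below are hypothetical values, not flight parameters:

```python
def exposure_flag(mean, saturated_frac, dn_range=(200.0, 3500.0), sat_max=0.01):
    """Toy 'correct exposure' check using the histogram mean and the
    fraction of saturated pixels. Returns (ok, feedback): feedback is
    set when the commanding should be adjusted, None otherwise."""
    ok = dn_range[0] <= mean <= dn_range[1] and saturated_frac <= sat_max
    if ok:
        return True, None
    # Under-exposed images ask for a longer exposure, all other
    # failures (too bright or too many saturated pixels) for a shorter one
    return False, ('increase_exposure' if mean < dn_range[0]
                   else 'decrease_exposure')
```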

The CEB thus needs to contain a memory buffer in which the images are stored while the image processing procedures alter their priority numbers. When the priority of an image exceeds a configurable threshold, the image is compacted by stripping all image characterization data and sent to the spacecraft telemetry buffer. When a new image is acquired, it is put in an empty image slot. If no image slot is available, the new image overwrites the oldest, lowest-priority image in the buffer.
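The buffer logic described above can be sketched as follows; the slot count, threshold value, and the in-memory representation are illustrative only:

```python
class ImageBuffer:
    """Sketch of the CEB image buffer: images wait in slots while the
    processing adjusts their priorities; crossing the threshold sends
    an image to telemetry; when full, the oldest lowest-priority image
    is overwritten."""

    def __init__(self, n_slots=8, threshold=100):
        self.n_slots, self.threshold = n_slots, threshold
        self.slots = []  # each slot: {'seq', 'priority', 'data'}
        self._seq = 0

    def insert(self, data, priority):
        if len(self.slots) >= self.n_slots:
            # Evict the lowest-priority image, breaking ties by age
            victim = min(self.slots, key=lambda s: (s['priority'], s['seq']))
            self.slots.remove(victim)
        self.slots.append({'seq': self._seq, 'priority': priority, 'data': data})
        self._seq += 1

    def bump(self, seq, new_priority):
        """Update an image's priority; release it to telemetry if it
        now exceeds the threshold."""
        for s in self.slots:
            if s['seq'] == seq:
                s['priority'] = new_priority
                if new_priority > self.threshold:
                    self.slots.remove(s)
                    return s['data']  # compacted lump goes to the TM buffer
        return None
```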

Data Compression

Optimizing the science output of EUI requires balanced strategies for target selection, a posteriori image selection, and data compression. We propose a compression scheme with hardware implementation (FPGA). To comply with onboard limitations and scientific objectives, this FPGA should be able to compress in both lossless and lossy modes, with compression ratios ranging typically from 10 to 100; ratios of 30-50 deliver high quality and are taken as the baseline. It will moreover provide a low-resolution image (64×64 or 128×128) that the CPU can use to flag images for image selection, and additional information such as histogram characteristics or statistical moments of the image to detect e.g. flares.

Two main FPGA hardware options are foreseen:

  1. The most up-to-date is the JPEG2000 compression algorithm implemented in a Virtex-4 FPGA [23]. This hardware implementation requires ~2 Gbit of extra DRAM, which can be reduced by tiling, depending on the needed throughput. The radiation-hardened version of the Virtex-4 FPGA is being qualified for space applications [94]. The JPEG2000 algorithm has been successfully tested on full-Sun 1k×1k EIT images at a compression ratio of 50 while still preserving faint EIT-wave events. This implementation provides the desired low-resolution (thumbnail) images.
  2. The second option is the FLEXWave compression system, based on the local wavelet transform (LWT), implemented in a Xilinx Virtex2000E-08 FPGA [95]. It has the advantages of being already space-qualified, not requiring external memory, and consuming less power. It also provides low-resolution images. The drawback is that the current implementation is limited to 1k×1k images, implying tiling, unless a future development makes it able to process 4k×4k images.


Compression Quality as a function of the compression ratio (middle panel) for different regions on the Sun (left panel) as imaged by the simulated HRI EUV channels. On the right panel the same evaluation is made for the simulated HRI Lyman-α channel.

To evaluate the quality of the compressed image on different regions, a normalized index is built according to the formula Q = [Var(I0)/MSE]^(1/2). Here I0 is the original uncompressed image and MSE is the mean square error between I0 and the compressed image: the square root of the MSE is the standard deviation of the error, hence Q is an evaluation of the quality of the compression, independent of the imaged region.
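The quality index defined above is straightforward to compute; a minimal implementation:

```python
import numpy as np

def quality_index(original, compressed):
    """Normalized compression-quality index Q = sqrt(Var(I0) / MSE):
    the standard deviation of the original image measured in units of
    the error standard deviation. Higher Q means better fidelity,
    independently of the imaged region's brightness."""
    i0 = np.asarray(original, dtype=float)
    mse = np.mean((i0 - np.asarray(compressed, dtype=float)) ** 2)
    return np.sqrt(i0.var() / mse)
```

For instance, a compressed image whose error has unit variance yields Q equal to the standard deviation of the original image.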
Uncompressed TRACE images of 1 May 1998 in the 171 Å bandpass (1 arcsec resolution, mainly quiet Sun) and Lyman-α VAULT images of 14 Jun. 2002 (0.33 arcsec resolution) are used as high-resolution references to evaluate the compression ratio for the HRI channels. The performance of the JPEG2000 compression in terms of the Q factor is plotted in figure 1. In the computation, the photon and read noise in each pixel are explicitly taken into account. The curves in the middle and right panels can be used to get a first-order estimation of the compression ratio that achieves a desired quality. However, a dedicated analysis must be performed if the compression has to preserve special features of solar activity (e.g. waves, loop oscillations).