HORISON functionality 

Build visionary imagery solutions at incredible speed

+31 (0)50 211 0662

Call for any questions


Build pipelines graphically

HORISON applications are designed graphically in the Horus Linking Lab by dragging items known as components onto a canvas, setting the properties of those components, and connecting them.

The basic building blocks of Horus Framework applications are components. Apart from a number of basic components that are built in, most components are provided as plugins. Components can accept, transform, and/or produce data. Each component exposes a set of configurable properties that makes it adaptable to different tasks.

 

On the left is a list of available components, in the middle is the canvas with the design of the application and on the right are the properties of the selected component.

Start immediately with the full component library

The included component library can be divided into three main groups.

The first group consists of data grabbers. These components output data from cameras, GPS and IMU devices, and the network, to name a few sources. They usually have only an output pipe.

The second group consists of components that process data: they receive data from other components, process it, and output the result. Such components may convert, encode/decode, or encrypt/decrypt data, to name a couple of examples.

The third group consists of data writers and streamers. They receive data from other components and send it to the network or to a hardware device. These are characterized by the lack of any output pipes. The Component Browser lists all installed components and describes their specific functionality; used in concert, they can accomplish complex tasks.
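The three groups can be pictured as a minimal pipeline sketch. All class and method names below are illustrative only, not the actual Horus API:

```python
# Sketch of the grabber -> processor -> writer pipeline model.
# Hypothetical names; the real Horus components are configured graphically.

class Grabber:
    """Source component: has only an output pipe."""
    def read(self):
        return {"frame": b"\x00" * 16}  # placeholder sensor sample

class Processor:
    """Transform component: receives data, processes it, outputs the result."""
    def process(self, sample):
        sample["encoded"] = sample["frame"].hex()  # e.g. an encoding step
        return sample

class Writer:
    """Sink component: has only an input pipe, no output."""
    def __init__(self):
        self.sink = []
    def write(self, sample):
        self.sink.append(sample)

# Wire the components together and push one sample through the pipeline.
grabber, processor, writer = Grabber(), Processor(), Writer()
writer.write(processor.process(grabber.read()))
```

In the Linking Lab this wiring is done by connecting pipes on the canvas instead of writing code.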

Standard functionality & applications

Grabbing sensor data

Developers have to support data grabbing from multiple sensors. Each sensor has its own method of delivering data, be it through a vendor SDK or a standardized interface, so the software framework needs to support the control and data interface of each of these sensors.

Encrypt data

Some organizations need to prove that collected data is unaltered from its original format, and therefore require the data to be encrypted.
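One common way to make data tamper-evident is a cryptographic fingerprint, sketched here with Python's standard `hashlib` (this illustrates the idea only; it is not the Horus encryption component):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that can later prove the data is unmodified."""
    return hashlib.sha256(data).hexdigest()

original = b"sensor frame payload"  # placeholder for recorded sensor data
digest = fingerprint(original)      # store this alongside the recording

# Later: the stored data still matches its recorded fingerprint...
assert fingerprint(original) == digest
# ...while any modification produces a different digest.
assert fingerprint(b"tampered payload") != digest
```

In practice the digest would itself be signed or the data encrypted, so the proof cannot be regenerated by whoever altered the data.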

Multiplex & Synchronize data streams

Combining these data streams requires synchronization and multiplexing functionality. Each sensor device may have its own frequency (e.g. a GPS at 1 Hz and a camera at 10 Hz). To work with the combined data, those streams need to be synchronized or multiplexed into one signal.
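A simple synchronization strategy is to pair each camera frame with the GPS fix whose timestamp is nearest. The sketch below assumes timestamped samples as `(timestamp, payload)` tuples; it is illustrative, not the Horus multiplexer:

```python
# Pair each 10 Hz camera frame with the nearest 1 Hz GPS fix by timestamp.

def synchronize(frames, fixes):
    """frames and fixes are lists of (timestamp, payload), sorted by time."""
    merged = []
    for t_frame, frame in frames:
        # Pick the GPS fix whose timestamp is closest to this frame's.
        t_fix, fix = min(fixes, key=lambda f: abs(f[0] - t_frame))
        merged.append({"t": t_frame, "frame": frame, "gps": fix})
    return merged

camera = [(t / 10, f"frame{t}") for t in range(20)]  # 10 Hz over 2 seconds
gps = [(0.0, (53.21, 6.56)), (1.0, (53.22, 6.57))]   # 1 Hz fixes (lat, lon)

pairs = synchronize(camera, gps)  # 20 frames, each tagged with a GPS fix
```

Real multiplexers also have to handle clock drift and buffering, but the nearest-timestamp pairing captures the core idea.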

Recognize & anonymize imagery data (GDPR compliant)

Depending on the use case, images may need to be anonymized. To be GDPR compliant, organizations apply face and number-plate blurring during post-processing, before the imagery is used in applications. Sometimes anonymization must happen in (near) real time, so that unblurred imagery never leaves the recording vehicle.

Acquire, project and store imagery data

Imagery is an important part of most use cases: it helps humans and computers interpret (qualify) a situation faster. Some use cases need high-resolution images; in mobile mapping, for example, a high-resolution image is needed every 3.5 meters. Other use cases, like safety and security, require higher frame rates (video) to gain insight into the movement of people or objects.

Annotate & trigger to create actionable data

A use case may require devices to perform actions based on certain analysis. For example, if a thermal infrared sensor detects overheating, it could trigger an alarm via an I/O connection.
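The overheating example boils down to threshold-based triggering, sketched below with hypothetical names (a real deployment would drive an I/O pin instead of appending to a list):

```python
# Sketch of threshold-based triggering on a (simulated) thermal reading.
# ALARM_THRESHOLD_C and check_reading are illustrative, not the Horus API.

ALARM_THRESHOLD_C = 90.0

def check_reading(temp_c, actions):
    """Record an alarm action when the temperature exceeds the threshold."""
    if temp_c > ALARM_THRESHOLD_C:
        # In practice this would raise an I/O line or send a network message.
        actions.append(("alarm", temp_c))

actions = []
for reading in [72.5, 88.0, 95.2]:  # simulated sensor samples in Celsius
    check_reading(reading, actions)
```

Only the 95.2 °C sample exceeds the threshold, so a single alarm action is produced.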

Decode & Encode video data

Before video data is ready to be used, it often needs to be decoded or encoded: converted to another format, interpreted before it can be used (e.g. LiDAR data), or compressed so it can be streamed over a network connection.

Integration of third party libraries

These multi-sensor solutions often need to be integrated with other information systems or with other multi-sensor devices. Integration with third-party frameworks, such as GStreamer, NVIDIA DeepStream or Darknet, or with proprietary code is therefore needed to fit into a bigger ecosystem.

Multiple camera stitching

Immersive imagery (or 360-degree imagery) provides even better contextual insight than a single-camera solution, since objects can be seen from different viewing angles or in relation to other nearby objects. To merge multiple cameras into one immersive view, the images need to be stitched.

GUI configuration

Multi-sensor solutions sometimes require a graphical user interface, and configurable interfaces that display all the data and system information are often needed.

Measuring & Localizing

Many use cases profit from immersive images when the imagery is projected in a spherical or panoramic image projection. Especially when these images are combined with positional sensor results, functionality like measuring and localizing becomes available.

Platform independent deployment

Once a multi-sensor software application has been built, it needs to be deployed on a device. Does it need to run on a regular PC, on a small low-energy embedded board, or does it need graphical computing power?

Horus View and Explore B.V.

info@horus.nu
sales@horus.nu
support@horus.nu

Schweitzerlaan 12 (Entrance Q1)
9728 NP Groningen
The Netherlands

+31(0)50 211 0662
Chamber of Commerce: 67142656
VAT number: NL 856847707B.01
