
Introduction

Welcome New Atom Developer!

Our mission at Elementary Robotics is to build applications that can scale to impact our daily lives. The Atom OS is our way of creating a developer platform that makes complex programming simple and accessible to all and to grow a community of shared applications and skills.

Atom works with all major operating systems! Continue reading to learn more about Atom, how it works, and how it can help you reduce your development time and increase your code reusability and portability!

Documentation Overview

This documentation contains information about getting up and running with Atom on all levels, from installing Docker and launching your first element to the low-level Redis specification used to build new language clients.

We hope that we've covered the full scope of Atom in these docs but know that more documentation is always better :). If there's something missing or confusing, please file an issue on the GitHub repo and/or submit a pull request to fix it! We appreciate all of the feedback and contributions!

SDK Introduction

High-Level

Atom is an SDK that allows for high-throughput messaging, logging, and command handling in a distributed system, with client support for nearly every major programming language. It allows creators of robots, IoT systems, small server deployments, and more to quickly and easily develop reliable messaging systems with minimal effort across a few foundational paradigms.

Atom has been developed with both ease of use and performance in mind. Development and use of the Atom OS depends heavily on Docker. This document will cover Docker at a high level, mainly in areas applicable to Atom users and developers.

Alongside Docker, the main technology that Atom uses is Redis. Redis is the primary communication backend of the Atom OS and enables our cutting-edge communication paradigms. It is also the main driver of our language support, as it's supported in 50+ programming languages. A high-level Atom user/dev won't need to know much about Redis, while developers of our language clients will become intimately familiar with its many features.

Goals

Atom was developed out of a desire for an easier, performant paradigm for reusable microservices. When developing the system we were focused on a few main goals:

  1. Create an easy, performant command/response paradigm
  2. Create an easy, performant data publication paradigm
  3. Eliminate dependency issues
  4. Enable use in as many programming languages as possible, on as many platforms as possible
  5. Enable users to develop reusable applications
  6. Make serialization optional by design, but when desired easy and performant

Messaging

With the above goals in mind we first had to choose a messaging protocol and design a specification around it. Our system was originally built on ZeroMQ which, while incredibly performant, would require a significant amount of custom code to implement the easy messaging paradigms we wanted. ROS, on the other hand, has most of the concepts we needed, yet didn't quite meet our performance or ease-of-use requirements. After evaluating the technologies available we settled on Redis, specifically Redis Streams, as our primary messaging protocol.

Redis Streams are essentially in-memory time-series data stores that allow for both blocking and nonblocking interaction with data. The primary advantage they provide over a typical pub/sub socket is that Redis acts as a last value cache, i.e. it stores the most recent N items in a stream. A subscriber can either interact with the stream in traditional pub/sub fashion, getting events whenever new data is delivered, or poll the stream and request the most recent N entries whenever they'd like. This is quite powerful: it eliminates many issues with pub/sub, such as the slow subscriber, and it allows publishers of data to truly decouple from all different sorts of subscribers.
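
To make this concrete, here is a minimal redis-py sketch showing both consumption patterns against the same stream. The stream name and values are illustrative, and a local Redis server plus the redis Python package are assumed:

import redis

r = redis.Redis(decode_responses=True)

# A publisher appends entries; Redis retains roughly the most recent
# 1024, acting as a last value cache for the stream.
r.xadd("stream:example:data", {"value": "42"}, maxlen=1024, approximate=True)

# Pattern 1: pub/sub style -- block up to 1s waiting for data newer than now.
new_entries = r.xread({"stream:example:data": "$"}, block=1000)

# Pattern 2: polling -- fetch the 5 most recent entries on demand.
recent = r.xrevrange("stream:example:data", "+", "-", count=5)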

Another advantage of Redis Streams is consumer groups. With a consumer group, we can set up multiple subscribers on a single data stream where Redis will handle distributing the N messages coming in over the stream to the M subscribers such that no two subscribers get the same message. This allows for load balancing, A/B testing, and more paradigms with no additional effort.
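
A hedged sketch of that behavior at the Redis level follows; the stream and group names are illustrative, and note that Atom's clients don't currently expose consumer groups directly (see the Future Improvement notes later in the spec):

import redis

r = redis.Redis(decode_responses=True)

# Create a consumer group over the stream (mkstream creates it if empty)
r.xgroup_create("stream:example:data", "workers", id="0", mkstream=True)

# Two consumers in the same group: Redis load-balances entries between
# them, guaranteeing no entry is delivered to both.
a = r.xreadgroup("workers", "worker_a", {"stream:example:data": ">"}, count=1)
b = r.xreadgroup("workers", "worker_b", {"stream:example:data": ">"}, count=1)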

Finally, Redis is a hardened technology with an active developer base, which leads us to believe that building a system atop Redis will enable us to be more consistently stable and will give us the tools to fix the issues we see when they do arise.

Specification

With the messaging protocol chosen, the core of the Atom OS is a pair of specifications:

  1. Messaging protocol atop Redis
  2. End-user API

The language clients, then, will implement (1) while exposing (2) to the users in a consistent fashion. Most atom developers will mainly be concerned with (2), i.e. "How do we use atom?", while (1) is needed for developers to create and verify new language clients.

The full specification can be found in the Specification section of this documentation.

Language Clients

A language client, then, implements the end-user API in the language of its choice. Users can choose to interact with Atom systems in a span of languages from C to JavaScript (coming soon!). For each supported language in the system you'll see a tab on the right-hand side of this documentation that shows the implementation details for that particular language.

Some languages are more suited to some tasks than others, and Atom gives the user the flexibility to implement their desired task in their desired language. Elements requiring hardware or Linux drivers, for example, are typically written in C/C++ for performance, while many ML algorithms favor Python and web-facing code is often written in JavaScript.

Elements

The final concept in the Atom OS is that of an element. An element is a fully containerized microservice that utilizes an Atom OS language client to expose some novel functionality to the system. Some examples of elements are:

  1. Realsense Camera Driver
  2. Stream Viewer
  3. Segmentation algorithm
  4. Recording tool

Each element exposes its functionality through two main features of the Atom OS:

  1. Commands
  2. Data Streams

Commands

Commands allow for one element to call functionality present in another element. Some example commands are:

Commands can take an optional data request payload and return an optional response payload.

Commands can be called in either a blocking or nonblocking fashion depending on the caller's preference.

Data Streams

Data streams are published by elements for other elements to consume. For example, the realsense element publishes streams of color, depth, and pointcloud frames at 30Hz.

Using Redis Streams, the publisher is able to publish completely agnostic to any subscribers and/or their preferred subscription method. The subscribers can then choose to subscribe to all data in an event-driven fashion, poll at their desired frequency for the most recent value, or traverse the stream in large chunks, querying for all data since they last read.

Each piece of data in a stream is called an "Entry".

SDK Concepts

Nucleus

The nucleus is the core of the Atom system. It runs a Redis server that enables elements to communicate with one another.

Element

An element uses the atom library to provide functionality to other elements. This functionality includes reading/writing data to a stream and implementing a command/response system.

For example, one could implement a robot element that publishes its current state on a stream. This robot element could also contain a set of commands that tell the robot to move to a certain position in space. In addition to the robot element, one could have a corresponding controller element to consume the state of the robot and command it to move accordingly.
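
Here is a sketch of that example in the Python client. The element, stream, and command names are hypothetical; the calls themselves are the documented APIs covered in the sections below:

from atom import Element
from atom.messages import Response

# Hypothetical robot element: publishes its state and serves a "move" command
robot = Element("robot")

def move(data):
    # ... drive toward the deserialized target position ...
    return Response("ok")

robot.command_add("move", move, timeout=1000, deserialize=True)
robot.entry_write("state", {"position": "0,0,0"}, maxlen=1024)
# robot.command_loop() would then serve incoming commands

# The corresponding controller element consumes state and commands motion
controller = Element("controller")
state = controller.entry_read_n("robot", "state", 1)
controller.command_send("robot", "move", [1, 0, 0], block=True, serialize=True)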

Each element is packaged as its own Docker container which sends and receives all of its data using the nucleus. This containerization allows each element to be developed with its own dependencies while simultaneously interacting with other elements with their own dependencies.

Command

Issued by an element to execute some functionality of another element.

Response

Returned by an element to indicate the results of the command to the caller element.

Entry

A timestamped data packet, published by an element on a stream, that can contain multiple fields of data. The Atom system is not concerned with the serialization of the data and leaves it as the responsibility of developers to know how the data was serialized in the element. Our recommendation for serialization is msgpack, as it is supported by the major programming languages.

Stream

Data publication and logging system used by atom. A stream keeps track of the previously published entries (up to a user-specified limit) so that elements can ask for an arbitrary number of entries.

SDK Specification and API

This section contains the Atom spec. It will cover, at a high level, the functions that each language client is expected to implement and then, at a low level, how each is implemented in Redis.

Element Initialization

#include <atom/redis.h>
#include <atom/element.h>
#include <assert.h>

//
// A note about redis: In the C API we explicitly pass around
//  redis handles. These are automatically managed using a pool in
//  the C++ API. The idea of a redis handle is a single connection
//  to the redis server and a single memory pool for redis
//  commands and responses. A handle should only ever be used by
//  one thread at a time. In this example we'll show how to make
//  the redis context. In the other examples we'll pass on including
//  this and trust the user.
//

redisContext *ctx = redis_context_init();
assert(ctx != NULL);

struct element *my_element = element_init(ctx, "my_element");
assert(my_element != NULL);

#include <atomcpp/element.h>

atom::Element my_element("my_element");
from atom import Element

my_element = Element("my_element")

Creates a new element.

API

Parameter Type Description
name string Name for the element

Return Value

Element created (none if a class constructor)

Spec

First, make a response stream named response:$name and a command stream named command:$name by XADDing the following key:value pairs to the streams:

Key Value
language Name of language client
version Version string for language client
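
For illustration, here is roughly what a language client does under the hood per this spec, sketched with redis-py; the version string is illustrative:

import redis

r = redis.Redis(decode_responses=True)
name = "my_element"

# Create the element's streams by XADDing the metadata packet to each
meta = {"language": "python", "version": "0.1.0"}
r.xadd("response:" + name, meta)
r.xadd("command:" + name, meta)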

Element Cleanup

#include <atom/element.h>

element_cleanup(ctx, my_element);
// Only needed if created with "new"
delete my_element;
del my_element

Called in the class destructor or when we're done with an element.

API

Parameter Type Description

Return Value

None

Spec

Delete the element's Redis streams using UNLINK: response:$name and command:$name.

Write Entry

#include <atom/element.h>
#include <atom/element_entry_write.h>

// Number of keys we're going to write to the stream
int n_keys = 2;

// First, we need to create the struct that will hold
//  the info for the write
struct element_entry_write_info *info =
    element_entry_write_init(
        ctx,                    // redis context
        my_element,             // Element pointer
        "stream",               // stream name
        n_keys);                // Number of keys for the stream

// Now, for each key in the stream, we want to initialize
//  the key string. This only needs to be done once, when the stream
//  is initialized. The memory for this was taken care of
//  in element_entry_write_init()
info->items[0].key = "hello";
info->items[0].key_len = strlen(info->items[0].key);
info->items[1].key = "world";
info->items[1].key_len = strlen(info->items[1].key);

// Now, we can go ahead and fill in the data. When we do this
//  in a loop, only this part need be repeated
info->items[0].data = some_ptr;
info->items[0].data_len = some_len;
info->items[1].data = some_other_ptr;
info->items[1].data_len = some_other_len;

// Finally we can go ahead and publish
enum atom_error_t err = element_entry_write(
    ctx,                                    // Redis context
    info,                                   // Stream info
    ELEMENT_DATA_WRITE_DEFAULT_TIMESTAMP,   // Timestamp
    ELEMENT_DATA_WRITE_DEFAULT_MAXLEN);     // Max len of stream in redis
#include <atomcpp/element.h>

// Make the entry_data map. entry_data_t is a typedef
// for a std::map<std::string,std::string>
atom::entry_data_t data;

// Fill in some fields and values
data["field_1"] = "value_1";
data["field_2"] = "value_2";

// And publish it
enum atom_error_t err = my_element.entryWrite("my_stream", data);
# The field_data_map is used to populate an entry with any number of fields of data
# The key of the map will allow elements who receive the entry to easily access the relevant field of data
field_data_map = {"my_field": "my_value"}
my_element.entry_write("my_stream", field_data_map, maxlen=512)


# If you would like to publish non-string data types (int, list, dict, etc.), you can serialize the data using the serialize flag
# Just remember to pass the deserialize flag when reading the data!
field_data_map = {"hello": 0, "atom": ["a", "t", "o", "m"]}
my_element.entry_write("my_stream", field_data_map, maxlen=512, serialize=True)

Publish a piece of data to a stream.

API

Parameter Type Description
name string Stream name
data map key:value pairs of data to publish
maxlen int Maximum length of stream. Optional. Default 1024

Return Value

Error code

Spec

XADD stream:$element:$name MAXLEN ~ $maxlen * k1 v1 k2 v2 ...

Note the ~ in the MAXLEN command. This is an important performance feature as it tells redis to keep at least $maxlen entries around but not necessarily exactly that many. Redis will remove entries when performant/convenient.

Note the * as well; it tells redis to auto-generate a stream ID for the entry. By default redis will make this a millisecond-level UNIX timestamp appended with -0 at the end, e.g. 1609459200000-0. If multiple entries have the same timestamp, redis will bump the -0 to -1 and so on.

Read N most recent entries

#include <atom/redis.h>
#include <atom/element.h>
#include <atom/element_entry_read.h>

//
// Note: For all "read" APIs, the C language client is entirely
//  zero-copy. As such, it is based exclusively around callbacks
//  and it is up to the user to perform any copies as necessary
//  if desired.
//
//  The read APIs all focus around the
//  struct element_entry_read_info, explained below:
//
//    const char *element;                      -- element name
//    const char *stream;                       -- stream name
//    struct redis_xread_kv_item *kv_items;     -- keys to read
//    size_t n_kv_items;                        -- number of keys
//    void *user_data;                          -- user pointer
//    bool (*response_cb)(                      -- response callback
//        const char *id,
//        const struct redis_xread_kv_item *kv_items,
//        int n_kv_items,
//        void *user_data);
//
enum expected_keys_t {
    EXPECTED_KEY_FOO,
    EXPECTED_KEY_BAR,
    N_EXPECTED_KEYS
};

#define EXPECTED_KEY_FOO_STR "foo"
#define EXPECTED_KEY_BAR_STR "bar"

// Read callback with following args:
//
//  id -- Redis ID of the entry read
//  kv_items -- pointer to same array of items created in the read info
//  n_kv_items -- how many kv items there are
//  user_data -- user pointer
bool callback(
    const char *id,
    const struct redis_xread_kv_item *kv_items,
    int n_kv_items,
    void *user_data)
{
    // Make sure that the keys were found in the data
    if (!kv_items[EXPECTED_KEY_FOO].found ||
        !kv_items[EXPECTED_KEY_BAR].found)
    {
        return false;
    }

    // Do something with the key data. Each item
    //  has a redisReply field which will have a data pointer
    //  and a length.
    char *foo_data = kv_items[EXPECTED_KEY_FOO].reply->str;
    size_t foo_data_len = kv_items[EXPECTED_KEY_FOO].reply->len;

    // Note the success
    return true;
}

// Make the info on the stack
struct element_entry_read_info info;

// Fill in the info
info.element = "element";
info.stream = "stream";
info.kv_items = malloc(
    N_EXPECTED_KEYS * sizeof(struct redis_xread_kv_item));
info.n_kv_items = N_EXPECTED_KEYS;
info.user_data = NULL;
info.response_cb = callback;

// Fill in the expected keys. The API is designed s.t. the user
//  specifies the keys they're looking for and the atom library
//  will fill in if the key is found and if so the data for it.
//  In this way we can be zero-copy above the hiredis API
info.kv_items[EXPECTED_KEY_FOO].key =
    EXPECTED_KEY_FOO_STR;
info.kv_items[EXPECTED_KEY_FOO].key_len =
    sizeof(EXPECTED_KEY_FOO_STR) - 1;
info.kv_items[EXPECTED_KEY_BAR].key =
    EXPECTED_KEY_BAR_STR;
info.kv_items[EXPECTED_KEY_BAR].key_len =
    sizeof(EXPECTED_KEY_BAR_STR) - 1;

// Now we're ready to go ahead and do the read
enum atom_error_t err = element_entry_read_n(
    ctx,
    my_element,
    &info,
    n);
#include <atomcpp/element.h>

// Make the vector of Entry classes that the call will return
std::vector<atom::Entry> ret;

// Make the vector of keys that we're expecting. This is a bit
//  of a legacy of the underlying C api and will hopefully
//  be removed in the future
std::vector<std::string> expected_keys = { "key1", "key2"};

// Number of entries to read
int n_entries = 1;

// Perform the read
enum atom_error_t err = my_element.entryReadN(
    "element",
    "stream",
    expected_keys,
    n_entries,
    ret);
# This gets the 5 most recent entries from your_stream
entries = my_element.entry_read_n("your_element", "your_stream", 5)

# If the element is publishing serialized entries, they can be deserialized
entries = my_element.entry_read_n("your_element", "your_stream", 5, deserialize=True)

Reads N entries from a stream in a nonblocking fashion. Returns the N most recent entries.

API

Parameter Type Description
element string Element whose stream we want to read
name string Stream name
n int How many entries to read

Return Value

List of entry objects. Each entry should have an "ID" field with the redis ID of the entry as well as a field for the key:value map returned from the read. Objects should be returned with the newest (most recent) at index 0 and then on.

Spec

XREVRANGE stream:$element:$name + - COUNT N

Uses XREVRANGE to get the most recent N items.

Read up to the next N entries

#include <atom/redis.h>
#include <atom/element.h>
#include <atom/element_entry_read.h>

//
// Note: see element_entry_read_n spec for the basics on
//  the read APIs.
//

struct element_entry_read_info info;

// ... Fill in the info ...

// Now we're ready to go ahead and do the read
enum atom_error_t err = element_entry_read_since(
    ctx,
    my_element,
    &info,
    ENTRY_READ_SINCE_BEGIN_BLOCKING_WITH_NEWEST_ID,
    timeout,
    n);
#include <atomcpp/element.h>

// Make the vector of Entry classes that the call will return
std::vector<atom::Entry> ret;

// Make the vector of keys that we're expecting. This is a bit
//  of a legacy of the underlying C api and will hopefully
//  be removed in the future
std::vector<std::string> expected_keys = { "key1", "key2"};

// Max number of entries to read
int max_entries = 100;

// String keeping track of last ID. If this is the first time
//  we're going to be doing the read, we want to leave this as ""
//  but note that we MUST SPECIFY A BLOCK TIME. After this,
//  we want to keep track of the final ID that was returned to us
//  in the API call and pass that through to the next call.
std::string last_id = "";

// How long to block waiting for any data
int block_ms = 1000;

// Do the read
enum atom_error_t err = element.entryReadSince(
    "element",
    "stream",
    expected_keys,
    max_entries,
    ret,
    last_id,
    block_ms);
# This will get the 10 oldest entries from your_stream since the beginning of time.
entries = my_element.entry_read_since("your_element", "your_stream", last_id="0", n=10)

# If the element is publishing serialized entries, they can be deserialized
entries = my_element.entry_read_since("your_element", "your_stream", last_id="0", n=10, deserialize=True)

Allows the user to traverse a stream without missing any data. Reads all entries on the stream (up to at most N) since the last entry we have read.

If last_id is not passed, this call will return the first new piece of data that's been written after our call.

If block is passed this API will block until new data is available.

This API can be used to traverse the stream in a blocking pub-sub fashion if block is true. Each time the call returns, loop over the list of entries, process them, then pass the final ID back in and wait for more data.
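
A sketch of that loop in the Python client follows. The element and stream names are illustrative, handle() is a hypothetical per-entry handler, and we assume each returned entry exposes its Redis ID under an "id" key:

# Start from the beginning of the stream; track the last ID we processed
last_id = "0"
while True:
    entries = my_element.entry_read_since(
        "your_element", "your_stream", last_id=last_id, n=100)
    for entry in entries:
        handle(entry)  # hypothetical per-entry handler
    if entries:
        # entries arrive oldest-first, so the last one is the newest
        last_id = entries[-1]["id"]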

API

Parameter Type Description
element string Element whose stream we want to read
name string Stream name
n int How many entries to read
last_id string Optional. If passed, Redis ID of last entry we read. If not passed we will return the first piece of data that is written to the stream after this call is made.
block bool If true, will block until we can return at least 1 piece of data. If false, can return with no data if none has been written.

Return Value

List of entry objects. Each entry should have an "ID" field with the redis ID of the entry as well as a field for the key:value map returned from the read. Objects should be returned with the oldest at index 0 and then on. Note that this order is the opposite of the order of the "Read N most recent" spec. This is intentional since it lends itself to the usage paradigms.

Spec

XREAD COUNT N (BLOCK $T) STREAMS stream:$element:$name $last_id

Use XREAD (with optional block call and some default timeout) since the last ID on the stream. If last_id is not passed, use $ as this is the special symbol for XREAD to let it know to return the next piece of data. Note that if $ is passed, (and therefore the user did not specify last_id) you must use BLOCK in the XREAD.

Future Improvement

This API should move to using consumer groups since this would allow us to not need to specify the last_id and simply specify the consumer and group. This would also automagically allow us to scale/load balance if we're already using consumer groups, since if another consumer is added to the same group redis will automatically take care of it.

Read entries with callbacks

#include <atom/redis.h>
#include <atom/element.h>
#include <atom/element_entry_read.h>

//
// Note: see element_entry_read_n spec for the basics on
//  the read APIs.
//
#define N_INFOS 3

struct element_entry_read_info infos[N_INFOS];

// ... fill in the infos ...

enum atom_error_t err = element_entry_read_loop(
    ctx,                                // redis context
    my_element,                         // element struct
    infos,                              // array of infos
    N_INFOS,                            // number of infos in the array
    true,                               // boolean to loop forever
    ELEMENT_ENTRY_READ_LOOP_FOREVER);   // timeout

#include <atomcpp/element.h>

// Callback handler for when we get a new piece of data on the
//  stream. Passed a reference to the entry that was read as
//  well as the user data pointer
bool callback(
    Entry &e,
    void *user_data)
{
    ...
}

// Make the ReadMap for handling the commands
atom::ElementReadMap m;

// User data pointer
void *user_data = NULL;

// Add the handler.
m.addHandler(
    "element",              // Element which publishes stream
    "stream",               // stream name
    { "key1", "key2" },     // expected keys
    callback,               // callback function
    user_data);             // user data pointer.

// This function will never return. If an integer is passed instead
//  of ELEMENT_INFINITE_READ_LOOPS, it will return after that many
//  pieces of data have been read
enum atom_error_t err = my_element.entryReadLoop(
    m,
    ELEMENT_INFINITE_READ_LOOPS);

from atom import StreamHandler

# This will print any entries that are published on stream_0 and stream_1
your_stream_0_handler = StreamHandler("your_element_0", "your_stream_0", print)
your_stream_1_handler = StreamHandler("your_element_1", "your_stream_1", print)
my_element.entry_read_loop([your_stream_0_handler, your_stream_1_handler])

# If the element is publishing serialized entries, they can be deserialized
my_element.entry_read_loop([your_stream_0_handler, your_stream_1_handler], deserialize=True)

This API is used to monitor multiple streams with a single thread. The user registers all streams that they're interested in along with the desired callback to use.

API

Parameter Type Description
handlers map Map of (element, stream) keys to handler values. Could also be a list of (element, stream, handler) tuples
n_loops int Optional. Maximum number of loops. If passed, function will return after n_loops XREADS. Note that this doesn't necessarily guarantee that n_loops pieces of data have been read on a given stream, since each XREAD can yield multiple pieces of data on a given stream.
timeout int Optional. Max timeout between calls to XREAD. If 0, will never time out. Otherwise, max number of milliseconds to wait for any data after which we'll return an error. Default 0, i.e. no timeout

Return Value

Error code

Spec

In a loop for up to n_loops iterations (or indefinitely if n_loops indicates):

  1. Use XREAD BLOCK $timeout COUNT Y STREAMS stream:$elem1:$name1 stream:$elem2:$name2 ... id1 id2 ..., where each stream in the handler map corresponds to stream:$elemX:$nameX and idX. Note that you'll need to keep track of the stream IDs internally s.t. with each call we're only getting new data. Note the COUNT Y statement in here: this limits the max number of entries returned in each call and can help if we get backlogged. It's optional and up to the language client if this should be added, but it will safeguard against network and memory bursts (see the sketch after this list).
  2. When the XREAD returns, loop over the data.
  3. For each piece of data, pass the ID and a key, value map to the handler indicated for that stream.
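
A minimal sketch of that loop at the Redis level follows. The stream keys are illustrative, and a real client would dispatch to its registered handlers rather than print:

import redis

r = redis.Redis(decode_responses=True)

# Last-seen ID per stream; "$" requests only data newer than this call
ids = {"stream:elem1:name1": "$", "stream:elem2:name2": "$"}

while True:
    # COUNT bounds each read so a backlog can't cause a memory burst
    for stream, entries in r.xread(ids, count=10, block=1000):
        for entry_id, fields in entries:
            print(stream, entry_id, fields)  # dispatch to the stream's handler
            ids[stream] = entry_id           # only fetch newer data next time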

Future Improvement

Again, it's likely better to move this to XREADGROUP so that we don't need to track the IDs internally and can let Redis do that for us.

Add Command

#include <atom/element.h>
#include <atom/element_command_server.h>

//
// Form 1: command_callback allocates response using malloc()
//

// Command callback, taking the following params:
//
//  data -- user data
//  data_len -- user data length
//  response -- response buffer, to be allocated using malloc() by this function. Will be freed by the API.
//  response_len -- length of response buffer
//  error_str -- if allocated, will send the error string to the user
//  user_data -- user pointer
int command_callback(
    uint8_t *data,
    size_t data_len,
    uint8_t **response,
    size_t *response_len,
    char **error_str,
    void *user_data)
{
    // Make the response
    *response = malloc(some_len);
    *response_len = some_len;

    // Note the success
    return 0;
}

// Add the command
element_command_add(
    my_element,             // Element struct
    "command",              // Command string
    command_callback,       // Callback function
    NULL,                   // Cleanup pointer -- none needed
    NULL,                   // User data -- none used here
    1000);                  // timeout

//
// Form 2: Using a custom cleanup function
//

void cleanup_callback(
    void *ptr)
{
    // Do some fancy cleanup with ptr
    fancy_free(ptr);
}

// Command callback, taking the following params:
//
//  data -- user data
//  data_len -- user data length
//  response -- response buffer, will be passed to cleanup_callback
//      since we did some complicated allocation
//  response_len -- length of response buffer
//  error_str -- if allocated, will send the error string to the user
//  user_data -- user pointer
int command_callback(
    uint8_t *data,
    size_t data_len,
    uint8_t **response,
    size_t *response_len,
    char **error_str,
    void *user_data)
{
    // Make the response using some fancy, non-standard
    //  allocation, likely a C++ object using this API.
    *response = fancy_alloc(some_len);
    *response_len = some_len;

    // Note the success
    return 0;
}

// Add the command
element_command_add(
    my_element,             // Element struct
    "command",              // Command string
    command_callback,       // Callback function
    cleanup_callback,       // Cleanup function
    NULL,                   // User data -- none used here
    1000);                  // timeout
#include <atomcpp/element.h>

//
// NOTE: All command handling APIs work with the ElementResponse
//          class as this is what they generally return.
//          The ElementResponse has the following APIs:
//

// Sets the data
void setData(
    const uint8_t *d,
    size_t l);
void setData(
    std::string d);

// Sets the error
void setError(
    int e,
    const char *s);
void setError(
    int e,
    std::string s = "");

//
// NOTE: There are three ways to use the addCommand API in C++:
//      1. Callback-based
//      2. Class-based
//      3. Class-based with msgpack serialization/deserialization
//
//  All three are shown in these docs
//

//
// 1. Callback-based addCommand
//

bool command_callback(
    const uint8_t *data,
    size_t data_len,
    ElementResponse *resp,
    void *user_data)
{
    // Set the data in the response
    resp->setData("some_data");

    // Note success/failure of the callback
    return true;
}

// Add the command
enum atom_error_t err = my_element.addCommand(
    "command_name",                 // Command name
    "command description string",   // Description string
    command_callback,               // Callback function
    user_data,                      // User data pointer
    1000);                          // timeout, in milliseconds

//
// 2. Class-based addCommand
//

enum atom_error_t err = my_element.addCommand(
    new CommandUserCallback(
        "command_name",
        "command description string",
        command_callback,
        user_data,
        1000));

//
// 3. Class-based addCommand with msgpack
//

// Define your class that implements the CommandMsgpack template.
//  In this template you'll find the following:
class MsgpackHello : public CommandMsgpack<
    std::string, // Request type
    std::string> // Response type
{
public:
    using CommandMsgpack<std::string, std::string>::CommandMsgpack;

    // Validate the request data. There is a Req *req_data
    //  in the class
    virtual bool validate() {
        if (*req_data != "hello") {
            return false;
        }
        return true;
    }

    // Run the command. Can use both the Req* req_data and
    //  should set Res* res_data.
    virtual bool run() {
        *res_data = "world";
        return true;
    }
};

// Add a class-based command with msgpack. This will test msgpack
//  as well as any memory allocations associated with it
enum atom_error_t err = my_element.addCommand(
    new MsgpackHello(
        "hello_msgpack",
        "example messagepack-based hello, world command",
        1000));
from atom.messages import Response

# Let's add a command to our element that will add 1 to the input data
# Notice that the developer is responsible for converting the data sent from the element
# Also notice that every command must return a Response object
def add_1(data):
    return Response(int(data) + 1)

# Since this command is simple, we can set the timeout fairly low
my_element.command_add("add_1", add_1, timeout=50)

# Alternatively, we could use serialization to keep from having to convert data types.
def add_2(data):
    return Response(data + 2, serialize=True)

# The deserialize flag will allow the data to be deserialized before it is sent to the add_2 function
my_element.command_add("add_2", add_2, timeout=50, deserialize=True)

Adds a command to an element. The element will then "support" this command, allowing other elements to call it.

API

Parameter Type Description
name string Name of the new command
handler function Handler to call when command is called by another element. Handler should take command data as an argument and return some combination of response data and an error code.
timeout int Timeout to pass to callers in ACK packet. When a command is called by another element, the caller gets an ACK with this timeout telling them how long to wait before timing out if they're going to do a blocking wait for the response

Return Value

Error Code

Spec

Adds command info to internal data structure so that we can effectively use it when we get command requests from other elements. Typically best to use a map of some sort internally, but up to the language client to determine how to do it.

Handle Commands

#include <atom/element.h>
#include <atom/element_command_server.h>

enum atom_error_t err = element_command_loop(
    ctx,
    my_element,
    true,
    ELEMENT_COMMAND_LOOP_NO_TIMEOUT);
#include <atomcpp/element.h>

// Loop forever, handling commands. If passing an integer instead
//  of ELEMENT_INFINITE_COMMAND_LOOPS, will return after N commands
//  have been dispatched.
enum atom_error_t err = my_element.commandLoop(
    ELEMENT_INFINITE_COMMAND_LOOPS);
my_element.command_loop()

Puts the current thread into a command-handling loop. The thread will now serve command requests from other elements for commands that have been previously added using the "Add Command" API.

API

Parameter Type Description
n_loops int Optional. Number of loops to handle before returning. If 0, loop indefinitely.
n_threads int Not currently supported. Optional. Specifies the number of command handling threads to spin up

Return Value

Error Code

Spec

In a loop for up to n_loops iterations (or indefinitely if n_loops indicates):

  1. XREAD BLOCK 0 STREAMS command:$self $id. This will do a blocking read on our command stream.
  2. For each command request that we get from the command stream, perform the steps below. Note that when doing the XREAD from the command stream we get a unique command ID (the entry ID) which when coupled with the element's name makes a globally unique command identifier in the system (element, Entry ID).
  3. Check to see if the command is supported. If not, send an error response on their response stream, response:$caller. Otherwise, proceed.
  4. Send an ACK packet back to the caller on their response stream, response:$caller. In the ACK specify the timeout that was given to us when the user called the "Add Command" API. Also specify the entry ID and our element name s.t. the caller knows for which command the ACK is intended.
  5. Process the command, calling the registered callback and passing any data received to it.
  6. Send a response packet back to the caller on their response stream, response:$caller. The response will contain the response data from the callback as well as an error code. It will again also contain our element name as well as the entry ID so that the caller knows for which command this response is intended.

All writes to response streams are done with XADD response:$caller * ... where $caller is the name of the element that called our command. See below for expected key, value pairs in the command sequence.

Note that when we're referencing a "packet" here we're talking about a single entry on either a command or response stream.

Ideally this is moved to use XREADGROUP so that we can multi-thread this by just spinning up multiple copies of the handler thread in the same consumer group with different consumer IDs. This will also allow us to not need to keep track of the IDs on the stream with the XREADs.

All handlers should be written with the idea that this is a multi-thread safe call, i.e. we can handle multiple commands simultaneously.

Command Packet Data

Key Type Required Description
element String yes Name of element calling the command, i.e. the caller
cmd String yes Name of the command to call
data binary/unspecified no Data payload for the command. No serialization/deserialization enforced. All language clients should support reads/writes of raw binary

Acknowledge Packet Data

Key Type Required Description
element String yes Name of element responding to the command, i.e. the responder
cmd_id String yes Redis entry ID from the responder's command stream. Note that the (element, cmd_id) tuple is a global unique command identifier in the system
timeout int yes Millisecond timeout for caller to wait for a response packet

Response Packet Data

Key Type Required Description
element String yes Name of element responding to the command, i.e. the responder
cmd_id String yes Redis entry ID from the responder's command stream. Note that the (element, cmd_id) tuple is a global unique command identifier in the system
cmd String yes Name of the command that was executed. This isn't strictly necessary to identify the command but is useful for debug/logging purposes
err_code int yes Error code for the command
data binary/unspecified no Response data. No serialization/deserialization enforced. All language clients should support reads/writes of raw binary
err_str string no Error string for the command
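
To tie the handling steps and packet tables above together, here is a heavily condensed redis-py sketch of the responder side. The handler registry is illustrative, and the unsupported-command and error paths (step 3, and the err_code/err_str fields on failure) are elided:

import redis

r = redis.Redis(decode_responses=True)
self_name = "my_element"
handlers = {"add_1": lambda d: str(int(d) + 1)}  # illustrative registry
last_id = "$"  # only serve commands issued after we start

while True:
    for _, entries in r.xread({"command:" + self_name: last_id}, block=0):
        for cmd_id, fields in entries:
            last_id = cmd_id
            caller = fields["element"]
            # ACK immediately, telling the caller how long to wait
            r.xadd("response:" + caller,
                   {"element": self_name, "cmd_id": cmd_id, "timeout": 1000})
            # Run the registered callback, then send the response packet
            data = handlers[fields["cmd"]](fields.get("data", ""))
            r.xadd("response:" + caller,
                   {"element": self_name, "cmd_id": cmd_id,
                    "cmd": fields["cmd"], "err_code": 0, "data": data})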

Send Command

#include <atom/element.h>
#include <atom/element_command_send.h>

// Response callback with parameters:
//
//  response -- data from command element
//  response_len -- length of response from command element
//  user_data -- user pointer passed to element_command_send
bool response_callback(
    const uint8_t *response,
    size_t response_len,
    void *user_data);

uint8_t *data;
size_t data_len;
char *error_str = NULL;

// Send the command
enum atom_error_t err = element_command_send(
    ctx,                    // redis context
    my_element,             // element struct
    "command_element",      // name of element we're sending command to
    "command",              // command we're calling
    data,                   // data to send to the command
    data_len,               // length of data to the command
    true,                   // whether or not to wait for the response
    response_callback,      // callback to call when we get a response
    NULL,                   // user pointer
    &error_str);            // pointer to error string

// Free the error string if we got one
if (error_str) {
    free(error_str);
}
#include <atomcpp/element.h>

//
// NOTE: there are two APIs through which commands can be sent
//      1. Using a data pointer, data length and response reference
//      2. Using a msgpack template
//

//
// 1. Using data and response
//

// Make the response that will be filled in
atom::ElementResponse resp;

// Send the command
enum atom_error_t err = my_element.sendCommand(
    resp,               // Response reference
    "element",          // Element name
    "command",          // Command name
    NULL,               // Command data
    0);                 // Command data length

//
// 2. Using msgpack
//

// Make the response that will be filled in
atom::ElementResponse resp;

std::string req = "hello";
std::string res;
enum atom_error_t err = my_element.sendCommand<std::string, std::string>(
    resp,               // Response reference
    "element",          // Element name
    "command",          // Command name
    req,                // Request data reference
    res);               // Response data reference

response = my_element.command_send("your_element", "your_command", data, block=True)

# If serialized data is expected by the command, pass the serialize flag to the function.
# If the response of the command is serialized, it can be deserialized with the deserialize flag.
response = my_element.command_send("your_element", "your_command", data, block=True, serialize=True, deserialize=True)

Sends a command to another element

API

Parameter Type Description
element String Name of element to which we want to send the command
command String Command we want to call
data binary/unspecified Optional data payload for the command
block bool Optional. Whether to wait for the response to the command. If false, will only wait for the ACK from the element and not the response packet; if true, will wait for both the ACK and the response

Return Value

Object containing error code and response data. Implementation to be specified by the language client.

Spec

Perform the following steps:

  1. XADD command:$element * ... where ... is the "Command Packet Data" from the "Handle Commands" spec. This sends the command to the element. This XADD will return an entry ID that uniquely identifies it and is based on a global millisecond-level redis timestamp. This ID will be used in a few places and will be referred to as cmd_id.
  2. XREAD BLOCK 1000 STREAMS response:$self $cmd_id. This performs a blocking read on our response stream for the ACK packet. Note that we're reading for all entries since our command packet which works nicely since the entry IDs use a global redis timestamp.
  3. If (2) times out, return an error, i.e. we didn't get an ACK
  4. If (2) returns data, loop over the data and look for a packet with matching element and cmd_id fields.
  5. If (4) didn't find a match, go back to 2, subtracting the current time off of the timeout and updating the ID to be that of the most recent entry we received.
  6. If (4) found a match, this is our ACK. If block is false then we're done, else proceed.
  7. XREAD BLOCK $timeout STREAMS response:$self $ack_id, where $timeout is the timeout specified in the ACK and $ack_id is the entry ID of the ACK which we got on our response stream. This will read all data since the ACK on the response stream.
  8. If (7) times out, return an error, i.e. we didn't get a response.
  9. If (7) returns data, loop over the data and look for a packet with matching element and cmd_id fields.
  10. If (9) didn't find a match, go back to (7), subtracting the current time off of the timeout and updating the ID to that of the most recent entry we received.
  11. If (9) did find a match, this is our response. Process the packet and return the proper data to the user.

Note that if implemented correctly, per the spec, this is a thread-safe process, i.e. multiple threads in the same element can be monitoring and using the response stream without any issue.
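
A condensed redis-py sketch of the caller side follows. The element and command names are illustrative, and the retry bookkeeping of steps 5 and 10 plus the response phase of steps 7-11 are reduced to comments:

import redis

r = redis.Redis(decode_responses=True)
self_name, target = "my_element", "command_element"

# Step 1: XADD the command packet; the returned entry ID is our cmd_id
cmd_id = r.xadd("command:" + target,
                {"element": self_name, "cmd": "command", "data": ""})

# Steps 2-6: read our response stream since cmd_id, looking for the ACK
ack = None
result = r.xread({"response:" + self_name: cmd_id}, block=1000)
if not result:
    raise TimeoutError("no ACK received")  # step 3
for _, entries in result:
    for entry_id, fields in entries:
        if fields.get("element") == target and fields.get("cmd_id") == cmd_id:
            ack = fields  # step 6: our ACK; non-matches would loop back (step 5)

# Steps 7-11 repeat the same matching read, blocking for ack["timeout"] ms,
# this time looking for the packet carrying err_code and data.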

Log

#include <atom/atom.h>
#include <syslog.h>

//
// 3 APIs to log:
//  1. Using printf format + args
//  2. using string + length
//  3. Using vfprintf format + variadic args
//

// 1. Using format
enum atom_error_t err = atom_logf(
    ctx,
    my_element,
    LOG_DEBUG,
    "I %s to log!",
    "love");

// 2. Using msg + len
enum atom_error_t err = atom_log(
    ctx,
    my_element,
    LOG_DEBUG,
    "some_msg",
    9);

// 3. Using va_list.
enum atom_error_t err = atom_vlogf(
    ctx,
    my_element,
    LOG_DEBUG,
    "I %s to log!",
    args);
#include <atomcpp/element.h>
#include <syslog.h>

// Log with a log level and using printf-style formatting
my_element.log(LOG_DEBUG, "testing: level %d", LOG_DEBUG);
from atom.messages import LogLevel

my_element.log(LogLevel.INFO, "Hello, world!", stdout=True)

Writes a log message to the global atom log stream.

API

Parameter Type Description
level int Level indicating severity of log. Must conform to syslog level standard.
msg string Log string

Return Value

Error Code

Spec

XADD log * ... where ... are keys and values conforming to the below packet:

Key Type Required Description
element string yes Name of the element sending the log
level int yes syslog level of the log
msg string yes log string
host string yes Hostname of the container/computer running the element, i.e. contents of /etc/hostname. When run in a docker container this will be a unique container ID
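
For illustration, the raw Redis operation a client performs is sketched below with redis-py; LOG_DEBUG is syslog level 7, and socket.gethostname() approximates the /etc/hostname contents the table describes:

import redis
import socket

r = redis.Redis()

r.xadd("log", {"element": "my_element",
               "level": 7,  # syslog LOG_DEBUG
               "msg": "Hello, world!",
               "host": socket.gethostname()})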

Element Discovery

#include <atom/atom.h>

struct atom_list_node *elements = NULL;

enum atom_error_t err = atom_get_all_elements(
    ctx,
    &elements);

if (elements != NULL) {
    struct atom_list_node *iter = elements;
    while (iter != NULL) {
        fprintf(stderr, "Element name: %s", iter->name);
        iter = iter->next;
    }

    atom_list_free(elements);
}
#include <atomcpp/element.h>

std::vector<std::string> elements;

enum atom_error_t err = my_element.getAllElements(elements);
elements = my_element.get_all_elements()

Queries for all elements in the system

API

Parameter Type Description

Return Value

List of all elements in the system

Spec

Use SCAN to traverse all streams starting with response: and all streams starting with command:. Return the intersection of both lists. Do not use KEYS, as this is dangerous to run on production systems.
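
One possible shape of that spec, sketched in redis-py:

import redis

r = redis.Redis(decode_responses=True)

# SCAN (never KEYS) for both prefixes, then intersect the element names
responders = {k.split(":", 1)[1] for k in r.scan_iter(match="response:*")}
commanders = {k.split(":", 1)[1] for k in r.scan_iter(match="command:*")}
elements = responders & commanders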

Stream Discovery

#include <atom/atom.h>

struct atom_list_node *streams = NULL;

enum atom_error_t err = atom_get_all_data_streams(
    ctx,        // redis context
    "",         // element name. Leave empty for all elements
    &streams);  // pointer to the list to fill in

if (streams != NULL) {
    struct atom_list_node *iter = streams;
    while (iter != NULL) {
        fprintf(stderr, "Stream name: %s", iter->name);
        iter = iter->next;
    }

    // Free the list when done
    atom_list_free(streams);
}
#include <atomcpp/element.h>

//
// 1. For all elements
//

// Map of (element, [streams])
std::map<std::string, std::vector<std::string>> stream_map;

// Will fill in the map
enum atom_error_t err = my_element.getAllStreams(stream_map);

//
// 2. For a particular element
//

std::vector<std::string> stream_vec;

enum atom_error_t err = my_element.getAllStreams(
    "element",
    stream_vec);
streams = my_element.get_all_streams(element="your_element")

Queries for all streams in the system

API

Parameter Type Description
element String Optional. Specifies if streams returned should only be for a single element or for all elements

Return Value

List of all streams in the system, either for the element (if specified), or for all elements.

Spec

Use SCAN to traverse all streams starting with a prefix. If element is not specified, use prefix of stream:. Else, use prefix of stream:$element.

Get Element Version

version = my_element.get_element_version("your_element")

Allows user to query any given element for its atom version and language.

API

Parameter Type Description
element String The element name we want to query about its version

Return Value

Object containing error code and response data. Response data should have version and language fields. Implementation to be specified by the language client.

Spec

Version queries are implemented using existing command/response mechanism, with a custom version command automatically initialized by atom. This is a reserved name, so users should be unable to add custom command handlers with this string value. Version responses use the same response mechanism as any other command, with the data field populated by a dictionary containing 'version' and 'language' fields.

Set Healthcheck

def is_healthy():
    # This is an example health-check, which can be used to tell other elements that depend on you
    # whether you are ready to receive commands or not. Any non-zero error code means you are unhealthy.
    return Response(err_code=0, err_str="Everything is good")

my_element.healthcheck_set(is_healthy)

Allows the user to optionally set a custom healthcheck on an element. By default, any element running its command loop should report as healthy.

API

Parameter Type Description
handler function Handler to call when a healthcheck command is received. Handler function takes no args, and should return a response indicating whether the element is healthy or not using the err_code on the response.

Return Value

None

Spec

Healthchecks are implemented using the existing command/response mechanism, with a custom healthcheck command automatically initialized by atom. This is a reserved name, so users should be unable to add custom command handlers with this string value. Healthcheck responses use the same response mechanism as any other command: a 0 error code means the element is healthy and ready to accept commands, and anything else indicates a failure. If the element is unhealthy, the err_str field should be used to indicate the failure reason.

Wait for Elements Healthy

my_element.wait_for_elements_healthy(['your_element'])

Allows user to do a blocking wait until a given list of elements are all reporting healthy.

API

Parameter Type Description
elements list[string] List of element names we want to repeatedly query and wait until they are all healthy
retry_interval float If one or more elements are unhealthy, how long we should wait until retrying the healthchecks

Return Value

None

Spec

Wait for elements healthy should leverage the existing healthcheck command and block until all given elements report healthy. This command should be backwards compatible, so you should do a version check on each given element to make sure it has support for healthchecks. If it does not, assume it is healthy so that this command doesn't block indefinitely.
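
A sketch of that logic built on the documented APIs follows. The shape of the command_send return value is client-specific, so the "err_code" key is an assumption here, and the version gate is reduced to a comment:

import time

def wait_for_elements_healthy(element, elements, retry_interval=1.0):
    while True:
        # For each element: if its version predates healthcheck support,
        # a real client would treat it as healthy instead of sending this.
        ok = all(
            element.command_send(e, "healthcheck", "", block=True)["err_code"] == 0
            for e in elements)
        if ok:
            return
        time.sleep(retry_interval)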

Error Codes

The atom spec defines a set of error codes to standardize errors throughout the system.

Error Code Description
0 No Error
1 Internal Error, something that happened in the language client
2 Redis Error
3 Didn't get an ACK to a command
4 Didn't get a response to a command
5 Invalid command packet, i.e. not all required key/value pairs were present
6 Unsupported command
7 User callback for command failed
100-999 Reserved for language-client specific errors
1000+ Reserved for user-callback errors

SDK CUDA Support

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

If your element requires CUDA support for tasks like deep learning, there are a few extra steps to get set up.

  1. Install the appropriate NVIDIA driver and CUDA on your host machine.
  2. Install nvidia-docker2 by following their instructions.
  3. Rather than using elementaryrobotics/atom in your Dockerfile you will need to use elementaryrobotics/atom-CUDA-<CUDA_VERSION>, where <CUDA_VERSION> matches the version of CUDA on your host machine.
  4. Modify /etc/docker/daemon.json to use NVIDIA as the default runtime by adding the line "default-runtime": "nvidia", as in the example.
  5. Run sudo systemctl restart docker.service
  6. Build and start your containers using the docker-compose command as usual. To verify that everything is running, you can start a shell in your element and run nvidia-smi, which should show some output.
  7. Now you can add any dependencies that rely on CUDA to your Dockerfile!

Docker

The Atom OS is built atop Docker. Docker gives us many benefits, the primary ones being:

  1. Ship code + all dependencies in a single package. No install required.
  2. Multi-platform support. Write an element once and it will run on any OS.
  3. Simple element versioning + deployment through dockerhub.
  4. Deployment and monitoring through docker-compose.

This section will cover all of the general concepts of Docker as well as dive into detail in how we use it.

Overview

Docker is a containerization technology. When you create an empty Docker container, it's similar to creating a brand-new computer with a fresh installation of Linux. This is similar to installing a virtual machine (VM); however, Docker is much more performant. Rather than running an entirely separate virtual operating system (OS), Docker shares the core of the Linux operating system using a feature called kernel namespaces. This allows a single computer to run hundreds of Docker containers with no issues, while it would struggle to run more than a few VMs at once.

Installation

Test install

$ docker run hello-world

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

The official Docker site contains good instructions for installing Docker on your system. You'll want to install the Docker Community Edition, with the exception that Docker Toolbox is currently recommended for Windows and Mac users who wish to use Atom with USB-connected hardware such as the realsense camera.

Once docker is installed on your machine you can test the installation by running the command at right and verifying that the printout looks as seen below it.

Test a Container

Launch Container

$ docker run -it ubuntu:18.04 /bin/bash

Check OS Version

root@af7a6eb1b36f:/# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Try to run Python3

root@af7a6eb1b36f:/# python3
bash: python3: command not found

Now that we have docker installed, we want to go ahead and take it for a spin. We'll launch a container based off of Ubuntu 18.04 and bring up a basic terminal in the container. We can then check the version of Linux we're running with cat /etc/os-release and we see that indeed we are running Ubuntu 18.04!

Now, let's try to run Python3 and play around... and we see that Python3 isn't installed! Ubuntu 18.04 doesn't come with Python3 installed, so we'll go ahead and make our own Docker image that's based off of Ubuntu 18.04 but contains Python3.

Dockerfile

Example Dockerfile

FROM ubuntu:18.04

#
# Install anything needed in the system
#
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y git python3-minimal python3-pip

Build Dockerfile

$ docker build -f Dockerfile -t my_image .

...

Successfully built a406b2ba741b
Successfully tagged my_image:latest

To build a Docker container which supports Python3, we begin with the Dockerfile which specifies which version of linux to use and what to install in the container. Generally this will be your code and any of the dependencies that it requires.

An example Dockerfile can be seen at right.

This Dockerfile does the following:

  1. Starts from the base Ubuntu 18.04 image published on Dockerhub
  2. Runs apt-get update to refresh the package lists on the system
  3. Installs a few other things such as git and Python3

Once we have this Dockerfile, we can use the command at right to build it into an image named my_image. After a few minutes the command should complete and you should see the success messages at right.

List Images

List all images

$ docker image list

REPOSITORY                                          TAG                                                       IMAGE ID            CREATED             SIZE
my_image                                            latest                                                    a406b2ba741b        2 minutes ago      541MB
...

Once we've built our Dockerfile into an image we can go ahead and list the images that we've built on our system. You should see that we just recently built the my_image image.

Launch Container

Launch Container

$ docker run --name my_container -it my_image /bin/bash
root@5e97fa9b1025:/#

With a built image we can go ahead and launch a container from that image. A container is just an instance of an image. To launch a container we use docker run. The command at right does the following:

  1. Creates a new container from image my_image
  2. Runs a new process, /bin/bash in the new container. This allows you to pull up a command line in the new container.

Once you've run the above command you should see that you're now logged in as root within the container. Each container has a unique ID so that you can address them individually. Here, the container ID is 5e97fa9b1025. We'll address the container by its name, though, my_container, which is much easier than needing to know/remember the container ID.

Run Python Program

Launch Python3

root@5e97fa9b1025:/# python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Print Hello, World

>>> print ("Hello, World")
Hello, World

Exit Python3 session

>>> exit()
root@5e97fa9b1025:/#

Now that you are in the container, you can run a simple Python3 program using the version of Python3 that we installed. After running the exit() command you will exit the python interpreter but still be in the container which you launched.

Add a file

Write foo to bar.txt

root@5e97fa9b1025:/# echo "foo" > bar.txt

Read back bar.txt

root@5e97fa9b1025:/# cat bar.txt
foo

Files in the container work just like files on your main computer. We'll create this test file bar.txt and put the word foo in it. We'll then read back the contents of the file using the cat command.

List Containers

List all containers

$ docker container list
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
5e97fa9b1025        my_image            "/bin/bash"              4 minutes ago       Up 4 minutes                                 my_container

Now, in a new terminal on your host computer (not in the container) you can list the currently running containers. You'll see that there's just the one container, created from the my_image image, with an ID that matches the one in your container's command prompt.

Exit Container

Exit container

root@5e97fa9b1025:/# exit
$

List running containers

$ docker container list
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES

List all containers

$ docker container list -a
CONTAINER ID        IMAGE                                                                            COMMAND                   CREATED             STATUS                          PORTS                    NAMES
5e97fa9b1025        my_image                                                                         "/bin/bash"               8 minutes ago       Exited (0) About a minute ago                            my_container

In your container, run the exit command to finish your /bin/bash process. After this completes you'll be back in the terminal on your host computer. If you run another docker container list command you'll see that there are no active containers. However, if you run the docker container list -a command you'll see that your container is still there; it's just not running. This is an important distinction to make since your container contains all of the modifications that you've made to the base image, my_image.

Container Modifications

Launch new container and check for bar.txt

$ docker run --name my_container_2 -it my_image /bin/bash
root@75566ed7baa3:/# cat bar.txt
cat: bar.txt: No such file or directory

Run same command in existing, original container

$ docker start -i my_container
root@5e97fa9b1025:/# cat bar.txt
foo

If we were to launch a new container from the same original image and check for the bar.txt file, we wouldn't find it. This is because each time we create a new container from the original image it doesn't contain any of the modifications we've made to other containers. Each container is isolated; changes made in one don't affect other containers or the underlying image.

However, if we were to restart the original container we created, we'd see that our bar.txt file is still there, alive and well!

Execute command in running container

Read bar.txt using docker exec

$ docker exec -it my_container cat bar.txt
foo
$

Enter shell in running container

$ docker exec -it my_container /bin/bash
root@5e97fa9b1025:/#

One final concept that's important to understand about containers is that, as long as they're running, we can execute as many commands in them as we'd like. This is just like on your host computer, where you can run as many processes/applications as you'd like. To execute a command in a running container, we use docker exec. In a new terminal window on your host computer, run the first command at right, making sure your original container with bar.txt is still up and running from the docker start command in the previous section. This docker exec command will go into the running container, run cat bar.txt, and then exit. The second command at right uses the same mechanism to enter a shell session in the running container, which is one of the most common uses of docker exec.

Image Tags

Tag current image as V1

$ docker tag my_image my_image:v1

Add neofetch to Dockerfile

FROM ubuntu:18.04

#
# Install anything needed in the system
#
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y git python3-minimal python3-pip
RUN apt-get install -y neofetch

Rebuild Dockerfile

$ docker build -f Dockerfile -t my_image .

...

Successfully built 6c7b69898ec3
Successfully tagged my_image:latest

Tag current image as v2

$ docker tag my_image my_image:v2

Try to run neofetch in v1 container

$ docker run -it my_image:v1 neofetch
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"neofetch\": executable file not found in $PATH": unknown.

Try to run neofetch in v2 container

$ docker run -it my_image:v2 neofetch
            .-/+oossssoo+/-.               root@3c04e7192d8c
        `:+ssssssssssssssssss+:`           -----------------
      -+ssssssssssssssssssyyssss+-         OS: Ubuntu 18.04.1 LTS bionic x86_64
    .ossssssssssssssssssdMMMNysssso.       Host: XPS 15 9570
   /ssssssssssshdmmNNmmyNMMMMhssssss/      Kernel: 4.15.0-43-generic
  +ssssssssshmydMMMMMMMNddddyssssssss+     Uptime: 1 day, 34 mins
 /sssssssshNMMMyhhyyyyhmNMMMNhssssssss/    Packages: 255
.ssssssssdMMMNhsssssssssshNMMMdssssssss.   Shell: bash 4.4.19
+sssshhhyNMMNyssssssssssssyNMMMysssssss+   CPU: Intel i7-8750H (12) @ 4.100GHz
ossyNMMMNyMMhsssssssssssssshmmmhssssssso   Memory: 6389MiB / 31813MiB
ossyNMMMNyMMhsssssssssssssshmmmhssssssso
+sssshhhyNMMNyssssssssssssyNMMMysssssss+
.ssssssssdMMMNhsssssssssshNMMMdssssssss.
 /sssssssshNMMMyhhyyyyhdNMMMNhssssssss/
  +sssssssssdmydMMMMMMMMddddyssssssss+
   /ssssssssssshdmNNNNmyNMMMMhssssss/
    .ossssssssssssssssssdMMMNysssso.
      -+sssssssssssssssssyyyssss+-
        `:+ssssssssssssssssss+:`
            .-/+oossssoo+/-.

Tags are how we keep track of different versions of images. Let's say we want to modify our Dockerfile to add the line at the bottom that installs a program called neofetch. We first want to take our existing image and tag it as something more human-legible than its image hash, a406b2ba741b (note: your hash will differ). We can use the docker tag command to do this. This command takes two arguments where the first argument is the existing image hash/tag and the second is the new tag we want to create.

In our previous examples where we used docker run -it my_image /bin/bash, docker assumed the tag latest for my_image. This command is identical to running docker run -it my_image:latest /bin/bash.

Now, we want to add the neofetch program to the Dockerfile, rebuild, and tag the image as my_image:v2.

Finally, we can try to run the neofetch command in both v1 and v2 containers and see that in the v1 container neofetch isn't installed, while in the v2 container it is and it prints out a pretty logo and info about the OS. Note that the v2 command can be re-run without the v2 tag; Docker will assume latest, which refers to the same image as v2 since the most recent build was also tagged latest.

Docker Hub

Log into Docker Hub Account

$ docker login

Tag image under Docker Hub Account

$ docker tag my_image:latest $MY_DOCKERHUB_ACCOUNT/my_image:latest

Push image to Docker Hub

$ docker push $MY_DOCKERHUB_ACCOUNT/my_image:latest

Docker also has a cloud service, confusingly called "Docker Hub" in some places and "Docker Cloud" in others. Hopefully one day they'll get their messaging straight, but for all intents and purposes they can be considered to be the same thing.

Docker Hub functions much like GitHub: it hosts repositories to which you can push your built images and tags so that others can access them.

First, create an account on Docker Hub if you don't already have one.

Once you've created an account, we can go ahead and push some of our images to it. The first thing we want to do is log in.

Once logged in, we want to re-tag our my_image image to one that lives within our Docker Hub account. We do this by adding $MY_DOCKERHUB_ACCOUNT/ to the beginning of the image name. This tells docker and Docker Hub which user/organization the image belongs to. For example, if we wanted to push this image to the elementaryrobotics Docker Hub organization, we'd tag it as elementaryrobotics/my_image. For the command at right, use your Docker Hub account name.

Once you've re-tagged the image, we can push it to Docker Hub. When the push completes (it might take a minute or two), you should be able to navigate to the "Repositories" tab in Docker Hub and see that you have a new repository under your account with repository name my_image and a single tag latest. This repository is public by default, so now anyone in the world could run a container from your image using docker run -it $MY_DOCKERHUB_ACCOUNT/my_image! This is pretty cool in that you've now published your first image, but you'll likely want to delete it. You can do this in the settings for the repository.
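For example, while the repository remains public, anyone could fetch and run the image with the following (substituting your account name):

$ docker pull $MY_DOCKERHUB_ACCOUNT/my_image:latest
$ docker run -it $MY_DOCKERHUB_ACCOUNT/my_image /bin/bash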

Useful Commands

List all containers

$ docker container list

List all images

$ docker image list

Tag an image

$ docker tag $IMAGE_HASH $IMAGE_NAME:$IMAGE_TAG

Build an image from a Dockerfile named Dockerfile

$ docker build -t $SOME_TAG .

Build an image from an arbitrarily named Dockerfile

$ docker build -t $SOME_TAG -f $DOCKERFILE_NAME .

Remove all unused objects (stopped containers, dangling images, etc.)

$ docker system prune

Launch a container running its default command

$ docker run -it $IMAGE_NAME

Launch a container and override the command

$ docker run -it $IMAGE_NAME $COMMAND

Execute a command in a running container

$ docker exec -it $CONTAINER_NAME $COMMAND

Launch a shell in a running container (note: container must support bash)

$ docker exec -it $CONTAINER_NAME /bin/bash

As you're using docker, there's a fair number of useful commands. Some of the most commonly used are included here. Feel free to add/update this list!

Docker Toolbox

If you're running Mac or Windows and wish to use USB-connected hardware such as the realsense camera, you'll want to use Docker Toolbox instead of Docker CE. This is necessary to use USB-connected peripherals with Atom.

When you install Docker Toolbox, Virtualbox will also be installed on your machine. Docker then works by booting up a basic Linux virtual machine (VM) in Virtualbox and executing all of the docker commands in that VM. It's recommended to configure a few things in order to get the best performance.

Hard Disk Size

The default docker machine that's created with Docker Toolbox only has a 20GB drive associated with it. You'll likely want to allocate more space. Note that this allocation won't actually remove this amount of space from your computer immediately; it just allows the docker VM to grow to this size before hitting an error. To delete the default docker machine and re-create it with a larger 100GB disk:

docker-machine rm default

docker-machine create -d virtualbox --virtualbox-disk-size "100000" default

RAM

The default docker VM only allocates 1GB of RAM to the machine. It's recommended to give it 8 or 16GB, though it will work well enough on 1GB.
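If you'd like to change this, the memory allocation can be set when re-creating the machine, similar to the disk size above (a sketch; the --virtualbox-memory value is in MB):

docker-machine rm default

docker-machine create -d virtualbox --virtualbox-memory "8192" --virtualbox-disk-size "100000" default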

Line Endings

Windows and Linux use different line endings. We use docker-compose to mount files between your host OS and the docker container, and if those files have CRLF (Windows) line endings when they're mounted into the container we're going to have a bad time. As such, it's recommended to configure your git repo to peg line endings on particular files (such as shell scripts). See some docs here.
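As a minimal sketch, a .gitattributes entry in your repo can pin LF endings on shell scripts:

*.sh text eol=lf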

Ports

We use docker-compose to map ports between the docker container and your host computer so that you can see web pages/graphics. For most of the documentation you'll see that we use localhost:X or 127.0.0.1:X to access a port. When using Docker Toolbox, you instead need to use the IP address of the docker VM, typically 192.168.99.100. If you go through the Quickstart, you'll notice that the links to view the graphics don't work by default because of this. Simply replace localhost with 192.168.99.100 in the URL bar of your browser and you should be good to go.
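If your VM was created with a different IP, you can confirm it with:

docker-machine ip default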

Mounting Folders

In order to get folder mounts to work between Windows and the docker machine, you need to set COMPOSE_CONVERT_WINDOWS_PATHS=1 as an environment variable in your shell.
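For example, in a bash-style shell:

export COMPOSE_CONVERT_WINDOWS_PATHS=1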

USB Forwarding

In order for USB-connected hardware to work in Virtualbox we'll need to forward the USB device. This can be done pretty easily in Virtualbox. The docker VM must not be running while doing this config. To stop the machine, you can run

docker-machine stop

  1. Download the Virtualbox Extension Pack to enable USB support. You'll need to go to Help->About Virtualbox to check which version you have installed (as of 2/27/19 Docker Toolbox for Windows came with 5.2.8), then go to the download page and download the extension pack version that matches your installed version.
  2. Add the extension pack by going to File->Preferences->Extensions and then selecting the file you downloaded. You should then see a message letting you know that the extension pack was successfully installed.
  3. Right click on the default VM and then select Settings.
  4. Set up the desired USB controller and add a filter for the devices you want to forward by clicking on the plus button and choosing your desired device.

Docker Compose

With the basics of Docker understood, we can now learn about Docker Compose and how it's used in the Atom OS. Docker Compose is a tool that orchestrates launching and connecting multiple docker containers in a programmatic fashion. This is important because in the Atom OS we try to build elements that are small and reusable, each in its own container. There is a container for redis, one for the camera driver, one for viewing data, etc. We need to be able to easily launch all of the containers simultaneously, note any dependencies, and link them together.

Installing Docker Compose

See the instructions on the Docker site for information on installing docker-compose.

Docker-Compose file

Example docker-compose.yml file

version: "3.2"

services:

  nucleus:
    image: elementaryrobotics/nucleus
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true

  atom:
    image: elementaryrobotics/atom
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"
    command: "tail -f /dev/null"

volumes:
  shared:
    driver_opts:
      type: tmpfs
      device: tmpfs

Overview

The core of Docker Compose is the docker-compose.yml file. This is a file with YAML syntax. It specifies which containers to launch and volumes to create. In the example at right we'll launch two containers: one which contains the "nucleus" of the atom system and one which contains the Atom OS and client libraries. We'll also create a shared volume and mount it in both containers. This is essentially a shared folder between the containers which we use for communication between them.

Reference

The official docker-compose file syntax and reference can be found on the Docker site.

Detail

Within the docker-compose.yml file we're mainly concerned with the services and volumes sections. In the services section we will list each container we want docker compose to launch for us. The first indentation level under services is the name we'd like to give the service. This can be whatever you'd like, but for clarity it's recommended to have it match the image name.

Within each named service there are a few key items:

Keyword Description
image which docker image to launch the container from
volumes Contains information about shared volumes to mount in the container. We'll typically leave this section as it is in the default
depends_on Notes a dependency. In the atom service we see that we depend on the nucleus. This will then cause docker-compose to wait for the nucleus to launch before launching the atom container
command Overrides the default command for a container. When a container is started it will typically have some default command that launches the necessary processes. If the container doesn't have a default command or you want to override the default you can use this field. Here, the tail -f /dev/null command basically causes the container to just stay up and running so that we can go into it

The volumes section describes any volumes to create and how they can be shared between containers. Again, this will pretty much always be left as it is. This section just creates a shared temporary filesystem that can be used for communication between the containers.

Launching

Launch app

$ docker-compose up -d

List Containers

$ docker container list
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                    NAMES
edddb661e021        elementaryrobotics/atom      "tail -f /dev/null"      45 minutes ago      Up 45 minutes                                docker_walkthrough_atom_1
3f7df592f06e        elementaryrobotics/nucleus   "docker-entrypoint.s…"   About an hour ago   Up About an hour    6379/tcp                 docker_walkthrough_nucleus_1

Download the example docker-compose.yml file here. Move the file into the same directory as the Dockerfile from the Docker section.

Now that we have a docker-compose file, we want to go ahead and launch the configuration that it specifies. We can do this with docker-compose up -d as shown to the right. After doing this, we can go ahead and list the running containers and see that we indeed have two running containers.

Note that each container's name will be a combination of (1) the current folder name and (2) the service name. In newer versions of docker-compose there's also a hash value added onto the container name.

Pulling up a shell

Enter shell in container

$ docker exec -it docker_walkthrough_atom_1 /bin/bash
root@edddb661e021:#

Now that we have our system up and launched, the most common thing we'll want to do is open up a shell in one of the containers. We'll go into the atom container so that we can test our first atom commands! Note that you'll need to replace docker_walkthrough_atom_1 with your container name from docker container list as it will differ. You should typically be able to tab-complete the name which helps a bit.

Creating an element

Launch python3

root@edddb661e021:# python3
Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.

Import Atom

>>> from atom import Element

Create an element

>>> my_element = Element("my_element")

Now that we're in an active atom container we can use the python3 atom API to create an element.

Using the Command-line Interface

Launch Atom CLI

$ docker exec -it docker_walkthrough_atom_1 atom-cli
    ___  __________  __  ___   ____  _____
   /   |/_  __/ __ \/  |/  /  / __ \/ ___/
  / /| | / / / / / / /|_/ /  / / / /\__ \
 / ___ |/ / / /_/ / /  / /  / /_/ /___/ /
/_/  |_/_/  \____/_/  /_/   \____//____/



> list elements
my_element
atom-cli_edddb661e021

Atom also comes with a command-line interface (CLI) that can be useful for testing/debugging. In a new terminal on your host computer (don't exit the terminal that created the element!) we can launch the atom-cli in the atom container. We can then ask it to list all elements that it sees, and lo and behold there's our new element! There's also an element listed for the CLI itself. This lets us know that the system is up and running and working.

Shutting down

Shut Down

$ docker-compose down -t 0 -v

Once we're done with the system that we launched we can go ahead and shut it all down. This will stop and remove all containers and shared volumes. The elements and data that we created will be lost.

Cleaning Up

Cleaning up by deleting all the docker images

$ docker-compose down -t 0 -v --rmi all

To remove all the docker images downloaded by docker-compose up, we can pass additional arguments to the shutdown command. Only follow this step when you are finished running the demo, as it will remove all of the downloaded docker images from your system. This means that the next time you run the demo you will have to re-download all of the images, which might make it take slightly longer.

Building

Docker-Compose service built from Dockerfile

  example:
    container_name: example
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    command: "tail -f /dev/null"

Build everything in Docker-Compose file

$ docker-compose build

...

Successfully built 079202c53510
Successfully tagged docker_walkthrough_example:latest
nucleus uses an image, skipping
atom uses an image, skipping

Launch

$ docker-compose up -d

Test example image

$ docker exec -it docker_walkthrough_example_1 neofetch

            .-/+oossssoo+/-.               root@1e34b88c2ebd
        `:+ssssssssssssssssss+:`           -----------------
      -+ssssssssssssssssssyyssss+-         OS: Ubuntu 18.04.1 LTS bionic x86_64
    .ossssssssssssssssssdMMMNysssso.       Host: XPS 15 9570
   /ssssssssssshdmmNNmmyNMMMMhssssss/      Kernel: 4.15.0-43-generic
  +ssssssssshmydMMMMMMMNddddyssssssss+     Uptime: 1 day, 3 hours, 33 mins
 /sssssssshNMMMyhhyyyyhmNMMMNhssssssss/    Packages: 255
.ssssssssdMMMNhsssssssssshNMMMdssssssss.   Shell: bash 4.4.19
+sssshhhyNMMNyssssssssssssyNMMMysssssss+   CPU: Intel i7-8750H (12) @ 4.100GHz
ossyNMMMNyMMhsssssssssssssshmmmhssssssso   Memory: 7438MiB / 31813MiB
ossyNMMMNyMMhsssssssssssssshmmmhssssssso
+sssshhhyNMMNyssssssssssssyNMMMysssssss+
.ssssssssdMMMNhsssssssssshNMMMdssssssss.
 /sssssssshNMMMyhhyyyyhdNMMMNhssssssss/
  +sssssssssdmydMMMMMMMMddddyssssssss+
   /ssssssssssshdmNNNNmyNMMMMhssssss/
    .ossssssssssssssssssdMMMNysssso.
      -+sssssssssssssssssyyyssss+-
        `:+ssssssssssssssssss+:`
            .-/+oossssoo+/-.

Shut Down

$ docker-compose down -t 0 -v

So far we've only used Docker Compose to launch prebuilt images, but we can also use it to build from a dockerfile. In the services section of your docker-compose file, add the configuration at right.

Then, go ahead and run docker-compose build which tells docker-compose to build all of the images that it needs. It won't build the atom or nucleus images since they come prebuilt, but it will rebuild the image from your Dockerfile.

We can then launch the compose configuration, test the example image by running the neofetch command from before, and shut everything down.

Configuration Detail

Basic element (built from Dockerfile) configuration

  $element:
    container_name: $element
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"

Basic element (from image) configuration

  $element:
    container_name: $element
    image: $image
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"

Element with link between current folder and /development in container

  $element:
    container_name: $element
    image: $image
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
      - ".:/development"
    depends_on:
      - "nucleus"

Element requiring USB

  $element:
    container_name: $element
    image: $image
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"
    privileged: true

Element requiring graphics support

  $element:
    container_name: $element
    image: $image
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"
    environment:
      - "GRAPHICS=1"
    ports:
      - 6081:6080

For most elements built using the Atom OS, the default docker-compose.yml configuration (either built from Dockerfile or from an image), shown to the right, will suffice.

A quite useful configuration to add to your docker-compose is in the volumes section where you can set up a link between files on your host machine and a location in the container. This is super-useful for development/debug as you can make code changes locally and then test them in the container where all of the dependencies are installed.

Elements that want to show graphics will do so through two steps:

  1. Set the GRAPHICS environment variable to 1.
  2. Map port 6080 within the container to some external port. Note that in the ports configuration the order is HOST:CONTAINER. The image will run a virtual display and forward its graphics to localhost:$HOST_PORT, which can then be seen in a browser. In the example, we've mapped the VNC on port 6080 in the container to localhost:6081.

Elements that use your computer's USB ports will need to have the privileged flag set.

Atom Walkthrough

Now that we have a basic understanding of atom, docker, and docker-compose we can go ahead and make a simple element!

Here's what we'll cover in this walkthrough.

Project Template

Download the files below and put them into a new folder named atombot on your system. These files are taken from the template from the Atom OS repo.

File Description
Dockerfile Specifies how to build the element into a Docker container. Installs everything the element needs and copies the code.
launch.sh Runs when the element is booted, invokes the proper commands/sequence to get the element up and running.
docker-compose.yml Specifies which elements to launch and how to link them. At the very least needs the nucleus element as well as our element!

Creating a simple element

# atombot.py
from atom import Element
from atom.messages import Response
from threading import Thread, Lock
from time import sleep

class AtomBot:

    def __init__(self):
        # This defines atombot's current position
        self.pos = 2
        # We allow 5 different positions that atombot can move to
        self.max_pos = 5
        # An ascii representation of atombot!
        self.atombot = "o"
        # Lock on updates to robot position
        self.pos_lock = Lock()
        # Lock on updates to robot representation
        self.bot_lock = Lock()

    def move_left(self, steps):
        """
        Command for moving AtomBot in left for a number of steps.

        Args:
            steps: Number of steps to move.
        """
        # Note that we are responsible for converting the data type from the sent command
        steps = int(steps)
        if steps < 0 or steps > self.max_pos:
            # If we encounter an error, we can send an error code and error string in the response of the command
            return Response(err_code=1, err_str=f"Steps must be between 0 and {self.max_pos}")

        # Update the position
        try:
            self.pos_lock.acquire()
            self.pos = max(0, self.pos - steps)
        finally:
            self.pos_lock.release()

        # If successful, we simply return a success string
        return Response(data=f"Moved left {steps} steps.", serialize=True)

    def move_right(self, steps):
        """
        Command for moving AtomBot in right for a number of steps.

        Args:
            steps: Number of steps to move.
        """
        # Note that we are responsible for converting the data type from the sent command
        steps = int(steps)
        if steps < 0 or steps > self.max_pos:
            # If we encounter an error, we can send an error code and error string in the response of the command
            return Response(err_code=1, err_str=f"Steps must be between 0 and {self.max_pos}")

        # Update the position
        try:
            self.pos_lock.acquire()
            # Clamp to the right-most valid position (max_pos - 1) so get_pos_map doesn't index out of range
            self.pos = min(self.max_pos - 1, self.pos + steps)
        finally:
            self.pos_lock.release()

        # If successful, we simply return a success string
        return Response(data=f"Moved right {steps} steps.", serialize=True)

    def transform(self, _):
        """
        Command for transforming AtomBot!
        """
        # Notice that we must have a single parameter to a command, even if we aren't using it.

        # Update bot ascii representation
        try:
            self.bot_lock.acquire()
            if self.atombot == "o":
                self.atombot = "O"
            else:
                self.atombot = "o"
        finally:
            self.bot_lock.release()

        return Response(data=f"Transformed to {self.atombot}!", serialize=True)

    def get_pos(self):
        try:
            self.pos_lock.acquire()
            return self.pos
        finally:
            self.pos_lock.release()

    def get_pos_map(self):
        """
        Returns the current position of AtomBot as a visual.
        """
        pos_map = ["-"] * self.max_pos
        cur_pos = self.get_pos()
        try:
            self.bot_lock.acquire()
            pos_map[cur_pos] = self.atombot
            return_str = " ".join(pos_map)
            return return_str
        finally:
            self.bot_lock.release()

    def is_healthy(self):
        # This is an example health-check, which can be used to tell other elements that depend on you
        # whether you are ready to receive commands or not. Any non-zero error code means you are unhealthy.
        return Response(err_code=0, err_str="Everything is good")

if __name__ == "__main__":
    print("Launching...")
    # Create our element and call it "atombot"
    element = Element("atombot")

    # Instantiate our AtomBot class
    atombot = AtomBot()

    # We add a healthcheck to our atombot element.
    # This is optional. If you don't do this, atombot is assumed healthy as soon as its command_loop executes
    element.healthcheck_set(atombot.is_healthy)

    # This registers the relevant AtomBot methods as a command in the atom system
    # We set the timeout so the caller will know how long to wait for the command to execute
    element.command_add("move_left", atombot.move_left, timeout=50, deserialize=True)
    element.command_add("move_right", atombot.move_right, timeout=50, deserialize=True)
    # Transform takes no inputs, so there's nothing to deserialize
    element.command_add("transform", atombot.transform, timeout=50)

    # We create a thread and run the command loop which will constantly check for incoming commands from atom
    # We use a thread so we don't hang on the command_loop function because we will be performing other tasks
    thread = Thread(target=element.command_loop, daemon=True)
    thread.start()

    # This will block until every element in the list reports it is healthy. Useful if you depend on other elements.
    element.wait_for_elements_healthy(['atombot'])

    # Create an infinite loop that publishes the position of atombot to a stream as well as a visual of its position
    while True:
        # We write our position data and the visual of atombot's position to their respective streams
        # The maxlen parameter will determine how many entries atom stores
        # This data is serialized using msgpack
        element.entry_write("pos", {"data": atombot.get_pos()}, maxlen=10, serialize=True)
        element.entry_write("pos_map", {"data": atombot.get_pos_map()}, maxlen=10, serialize=True)
        # We can also choose to write binary data directly without serializing it
        element.entry_write("pos_binary", {"data": atombot.get_pos()}, maxlen=10)

        # Sleep so that we aren't consuming all of our CPU resources
        sleep(0.01)

Download the atombot.py file here. The file is also shown at right for reference. This file implements the AtomBot element using the Python3 language client of the Atom OS. It exposes several commands as well as publishes some data.

Setting up your Dockerfile

# Dockerfile

FROM elementaryrobotics/atom

# Want to copy over the contents of this repo to the code
#   section so that we have the source
ADD . /code

# Here, we'll build and install the code s.t. our launch script,
#   now located at /code/launch.sh, will launch our element/app

#
# TODO: build code
#

# Finally, specify the command we should run when the app is launched
WORKDIR /code
# If you had a requirements file you could uncomment the line below
# RUN pip3 install -r requirements.txt
RUN chmod +x launch.sh
CMD ["/bin/bash", "launch.sh"]

Nothing to do in this case, as the default Dockerfile should work for us. However, if you needed to install any Python dependencies through pip or run a Makefile, the Dockerfile would have to be modified.

Modifying your launch script

Launch Script

# launch.sh
#!/bin/bash

python3 atombot.py

Modify the launch script launch.sh by copying the content at right so that it runs your atombot script. Your docker container will run any commands in this script upon launch.

Launching the system with docker-compose

# docker-compose.yml

version: "3.2"

services:

  nucleus:
    container_name: nucleus
    image: elementaryrobotics/nucleus
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    command: ./launch.sh

  my_element:
    container_name: my_element
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"
    command: tail -f /dev/null

volumes:
  shared:
    driver_opts:
      type: tmpfs
      device: tmpfs

Build element

$ docker-compose build

Start app

$ docker-compose up -d

Rename the my_element service and container_name to atombot, so that our docker container is appropriately named.

Remove the line that contains tail -f /dev/null in the atombot service. This line is useful for development purposes as it keeps a container alive when the launch script is empty. However, since we have already modified the launch script, we no longer need this command.

While in the atombot directory, run the build and up commands at right.

Congratulations! Your element is now running.

Interacting and debugging with atom-cli

Print Container Info

$ docker container list

Launch Command-Line Interface (CLI)

$ docker exec -it atombot atom-cli

(CLI) Print help information

> help

(CLI) Turn off Msgpack

> msgpack false

(CLI) Print help for a given option

> help read

(CLI) List all running elements

> list elements

(CLI) List all streams for atombot

> list streams atombot

(CLI) Read atombot pos_map stream

> read atombot pos_map

(CLI) Move atombot to the left

> command atombot move_left 2

(CLI) Read atombot pos_map stream at 1Hz

> read atombot pos_map 1

(CLI) See history of all commands to atombot

> records cmdres atombot

(CLI) See all log messages

> records log

(CLI) Exit CLI

> exit

Shut down app

$ docker-compose down -t 0 -v

Now that our atombot element is running, let's interact with it using atom-cli. atom-cli is a tool used for debugging elements that comes installed in every element's container.

Tutorials

To get a broad view of the Atom OS with demos, please follow our tutorials section here - Tutorials. Make sure that you have installed docker and docker-compose on your machine before you proceed to the tutorials section.

Element Documentation

instance-segmentation

Build Status

CircleCI

Overview

instance-segmentation is an object-agnostic foreground segmentation algorithm. It uses depth and grayscale data to determine if non-background objects are present in the image. If there are, the algorithm will provide masks and bounding boxes for each object that it detects. This element is based on sd-maskrcnn; visit their project page for more information on training or benchmarking a model.

Building

Ensure that you have git-lfs installed, as the model weights are included in this repository.

Before running a build for the first time with docker-compose, initialize the submodules:

git submodule update --init --recursive

Commands

Command Data Response
get_mode None str
set_mode {"both", "depth"} str
segment None serialized dict of rois, scores, and TIF-encoded masks
stream {"true", "false"} str

Streams

To enable the streaming of color_mask, send "true" to the stream command.

Stream Format
color_mask TIF encoded image
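As a sketch, these commands can also be sent from the Python API covered earlier (the client element name here is hypothetical, and keyword arguments may vary by client version):

from atom import Element

element = Element("segmentation_client")  # hypothetical client element

# Enable streaming of the color_mask stream
element.command_send("instance-segmentation", "stream", "true")

# Request a segmentation; per the table above, the response is a serialized
# dict of rois, scores, and TIF-encoded masks
response = element.command_send("instance-segmentation", "segment", "", deserialize=True)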

docker-compose configuration

instance-segmentation:
  container_name: instance-segmentation
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - type: volume
      source: shared
      target: /shared
      volume:
        nocopy: true
  depends_on:
    - "nucleus"
    - "realsense"

realsense

Build Status

CircleCI

Overview

The realsense element obtains data from a realsense device and publishes the color, depth, and pointcloud data on a stream. A static transformation between the camera and world is published on a stream and can be updated by following the transform calculation procedure.

Commands

Command Name Data Response
calculate_transform None None

Streams

Stream Format
color TIF encoded image
depth TIF encoded image
pointcloud TIF encoded image
intrinsics dict of floats
transform dict of floats
accel dict of floats
gyro dict of floats

Installation Instructions

Please clone the librealsense repo on your local machine and follow the steps below to build and apply the patched kernel modules for your platform:

* Ubuntu with kernel 4.16:

./scripts/patch-ubuntu-kernel-4.16.sh

* Ubuntu Xenial on an Intel Joule:

./scripts/patch-realsense-ubuntu-xenial-joule.sh

* Arch-based distributions: You need to install the base-devel package group as well as the matching linux-headers (i.e. linux-lts-headers for the linux-lts kernel). Then navigate to the scripts folder (cd ./scripts/) and run the following script to patch the uvc module:

./patch-arch.sh

* Odroid XU4 with Ubuntu 16.04 4.14 image, based on the custom kernel provided by Hardkernel:

./scripts/patch-realsense-ubuntu-odroid.sh

Some additional details on the Odroid installation can also be found in installation_odroid.md

Check the patched modules' installation by examining the generated log as well as inspecting the latest entries in the kernel log:

sudo dmesg | tail -n 50

The log should indicate that a new uvcvideo driver has been registered. Refer to Troubleshooting in case of errors/warning reports.

Note: If you face an installation error like videobuf2 core is in use that prevents Linux from releasing the drivers, remove them manually by running the following commands and then reapply the patch:

sudo modprobe -r uvcvideo
sudo modprobe -r videobuf2_core

docker-compose configuration

To give our container access to the realsense device over USB, we must pass privileged: true

  realsense:
    image: elementaryrobotics/element-realsense
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
    depends_on:
      - "nucleus"
    privileged: true
    environment:
      - "ROTATION=0"

The rotation of the color and depth images can be configured through the ROTATION variable in the environment section, where the value is the rotation of the image in degrees and must be a multiple of 90.

Decoding the image streams

The element writes TIF encoded images to the color, depth, and pointcloud streams. They can be decoded by performing the following procedure.

import cv2
import numpy as np
from atom import Element

element = Element("realsense_consumer")  # hypothetical consumer element for this example

color_data = element.entry_read_n("realsense", "color", 1)
try:
    color_data = color_data[0]["data"]
except (IndexError, KeyError):
    raise Exception("Could not get data. Is the realsense element running?")
color_img = cv2.imdecode(np.frombuffer(color_data, dtype=np.uint8), -1)

Data Format

Image Type Size Data Type Unit
color 480x640x3 uint8 Intensity
depth 480x640x1 uint16 Distance in mm
pointcloud 307200x3 float32 Distance in m

Static Transform Calculation

If you would like to use the realsense camera from a static position and convert camera coordinates to world coordinates, you can calculate the transform between the camera space and world space.

First, let's estimate the transform.

  1. Place the checkerboard within the field of view of the realsense camera.
  2. Send the calculate_transform command to the realsense element.

Now the transform stream will broadcast an xyz translation with a quaternion. You can convert camera coordinates to world coordinates by performing the following steps (a sketch in code follows the list).

  1. Convert the quaternion to a rotation matrix
  2. Perform a dot product between the camera coordinates and the rotation matrix
  3. Add the xyz translation to the result in step 2.
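A minimal sketch of this procedure in Python, assuming a recent scipy is available and that the translation and quaternion values have been read off of the transform stream (the numbers below are placeholders):

import numpy as np
from scipy.spatial.transform import Rotation

translation = np.array([0.1, 0.2, 0.3])  # xyz translation from the transform stream (placeholder)
quaternion = [0.0, 0.0, 0.0, 1.0]        # x, y, z, w quaternion from the transform stream (placeholder)

# 1. Convert the quaternion to a rotation matrix
rotation = Rotation.from_quat(quaternion).as_matrix()

# 2. Rotate the camera coordinates, then 3. add the xyz translation
camera_point = np.array([0.5, 0.0, 1.0])
world_point = rotation @ camera_point + translation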

Attribution

The realsense element is based on this example from Intel.

Record

Build Status

CircleCI

Overview

This element provides the recording functionality for the atom system. It will listen to a stream and record all entries for either a fixed amount of time or number of entries. The recording can be stored either in the system-wide tmpfs that's mounted in docker-compose or in a persistent folder mapped from your host system again using docker-compose.

The recordings can be retrieved in a number of ways:

  1. Access the raw file on disk -- likely the least useful.
  2. Access them through the API. This element has an API that will return recordings either entirely or by chunk. Recordings are returned as a msgpack'd list of entries.
  3. Convert them to CSV. This element provides an API to convert any recording into a CSV and allows for custom processing of the data in doing so.
  4. View them as a plot. This element will load a recording and can visualize the data in plots through a powerful, flexible API. In the API you can specify how to format/convert the data from the recording into a plottable format so that it should meet all of your needs. Plots are interactive and can be saved as images.

All commands and responses from this element use msgpack serialization and deserialization.

File locations

The record element supports saving files in both temporary and permanent file locations. The temporary location will be in the shared tmpfs mounted between all elements in docker-compose at /shared in the container. The permanent location must be mounted by the user in docker-compose and must be located at /recordings. If the user doesn't mount a folder at /recordings in the container, then only the temporary storage of files will work. See the docker-compose section of these docs for more details

Commands

start: Start Recording

Atom CLI example

> command record start {"name":"example", "t":5, "perm":false, "e":"waveform", "s":"serialized"}
{
  "data": "Started recording example for 5 seconds and storing in /shared",
  "err_code": 0,
  "err_str": ""
}

Request

The start recording command takes a msgpack'd JSON object with the following keys:

Key Required Default Description
name yes Name of the recording. This will create a recording file named name.atomrec
e yes Name of element whose stream we want to record
s yes Name of stream we want to record
t no 10 Duration of the recording, in seconds.
n no Duration of the recording, in entries. If specified, will override the t value specified.
perm no false Whether to store the recording in the permanent or temporary location
Response

On success, returns a msgpack'd string letting the user know that the recording was started and where it was started.

On error, returns one of the error codes below:

Error Description
1 Name not provided
2 Element name not provided
3 Stream not provided
4 Name already in use
5 perm true but /recordings not mounted in system
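Outside the CLI, the same request can be made from the Python API along these lines (a sketch; the client element name is hypothetical, and serialize=True msgpacks the request as this element expects):

from atom import Element

element = Element("record_client")  # hypothetical client element

response = element.command_send(
    "record", "start",
    {"name": "example", "t": 5, "perm": False, "e": "waveform", "s": "serialized"},
    serialize=True)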

stop: Stop Recording

Atom CLI example

> command record stop "example"
{
  "data": "Success",
  "err_code": 0,
  "err_str": ""
}
Request

The request for this API is simply a msgpack'd string with the active recording name to stop

Response

On success, returns a msgpack'd string letting the user know that the recording was stopped

On error, returns one of the error codes below:

Error Description
1 Recording not valid. Command must be for a valid, active recording

wait: Wait for recording to finish

Atom CLI example

> command record wait "example"
{
  "data": "Returned after 22.64138627052307 seconds",
  "err_code": 0,
  "err_str": ""
}
Request

The request for this API is simply a msgpack'd string with the name of the active recording we'd like to wait on for completion.

Response

On success, returns a msgpack'd string letting the user know that the recording is now done along with how long we spent waiting for it.

On error, returns one of the error codes below:

Error Description
1 Recording not valid. Command must be for a valid, active recording

list: List all recordings

Atom CLI example

> command record list
{
  "data": [
    "example"
  ],
  "err_code": 0,
  "err_str": ""
}
Request

None.

Response

A msgpack'd list of recording names which are present in the system, both in the temporary and permanent filesystem locations.

get: Get Recording Data

Atom CLI example

> command record get {"name":"example", "msgpack":true, "start": 0, "stop":0}
{
  "data": [
    [
      "1553901473204-0",
      {
        "tan": -1.1766610378013764,
        "sin": 0.7619910470709031,
        "cos": -0.647587557156396
      }
    ]
  ],
  "err_code": 0,
  "err_str": ""
}

Request

The get recording request takes a msgpack'd JSON object with the following fields:

Key Required Default Description
name yes Name of the recording to read, corresponding to the recording file named name.atomrec
msgpack no false Whether or not to use msgpack to unpack entry values before returning them. Consult the documentation of the stream producing the values to determine if this is necessary.
start no 0 Start entry index. The get request will return all entries in the range [start, stop], inclusive
stop no -1 End entry index. The get request will return all entries in the range [start, stop], inclusive
Response

A msgpack'd list of entries. Each entry is a tuple with the following values

Index Description
0 Redis ID of the entry in the stream
1 key:value map of data from the stream for the entry

On error, returns one of the error codes below:

Error Description
1 Name not provided
2 Failed to open recording file
3 Recording doesn't exist

plot: Plot recording data

Plots sin(x) from example recording

> command record plot { "name":"example", "msgpack":true, "plots":[ { "data": [[ "x", ["sin"], "value" ]] } ] }

Plots sin(x) from example recording, with title and access labels. Removes the legend.

> command record plot { "name":"example", "msgpack":true, "plots":[ { "data": [[ "x", ["sin"], "value" ]], "title": "Sin(x)", "x_label": "time (ms)", "y_label": "sin(x)", "legend": false } ] }

Plots sin(x) and cos(x) from example recording on a single plot

> command record plot { "name":"example", "msgpack":true, "plots":[ { "data": [[ "x", ["sin", "cos"], "value" ]] } ] }

Plots sin(x), cos(x) and tan(x) from example recording on a single plot. Bounds tan(x) using a python lambda between -10 and 10

> command record plot {"name":"example", "msgpack":true, "plots":[ { "data": [[ "x", ["sin", "cos"],  "value" ], ["max(-10, min(x, 10))", ["tan"], "value"]] } ] }

Plots sin(x), cos(x) and tan(x) from example recording on multiple plots. Bounds tan(x) using a python lambda between -10 and 10

> command record plot { "name": "example", "msgpack":true, "plots":[ { "data": [[ "x", ["sin"],  "value" ]] }, { "data": [[ "x", ["cos"],  "value" ]] }, { "data": [[ "max(-10, min(x, 10))", ["tan"],  "value" ]] } ] }

Plots sin(x), cos(x) and tan(x) from example recording on a single plot. Bounds tan(x) using a python lambda between -10 and 10. Saves the plot as a png and doesn't show it to the user.

> command record plot {"name":"example", "msgpack":true, "show": false, "save": true, "perm" : true,  "plots":[ { "data": [[ "x", ["sin", "cos"],  "value" ], ["max(-10, min(x, 10))", ["tan"], "value"]] } ] }
Request

The plot recording request takes a msgpack'd JSON object with the following fields:

Key Required Default Description
name yes Name of the recording to plot, corresponding to the recording file named name.atomrec
plots yes List of plots to make, where each item in the list is a plot object (see below)
msgpack no false Whether or not to use msgpack to unpack entry values before returning them. Consult the documentation of the stream producing the values to determine if this is necessary.
start no 0 Start entry index. The plot request will plot all entries in the range [start, stop], inclusive
stop no -1 End entry index. The plot request will plot all entries in the range [start, stop], inclusive
show no true If true, will show each plot and allow the user to interact with them. The API call won't return until all plots are closed
save no false If true, will save a .png of each plot
perm no false If true, store plots in permanent filesystem location, else in temporary filesystem location.
x no redis timestamp A string intended to be the pythonic completion of lambda entry: which will be passed the entry key:value map for each entry in the recording and is expected to return an x-value for the entry to be plotted against. This allows us to use something other than the redis timestamp for plotting x-values which is particularly useful when your data packets contain their own timestamps which are more accurate than the one auto-generated by redis
plot object

The core of the plot request is the list of plot objects specified in the plots key. Each plot object is in itself a msgpack'd JSON object with the following fields:

Key Required Default Description
data yes A list of tuples with either 2 or 3 values describing which keys to plot and how to interpret their data
title no something reasonable Title to use for the plot
x_label no something reasonable X label to use for the plot
y_label no something reasonable Y label to use for the plot
legend no true If true, will show the legend on the plot, else will not

The data object, as mentioned above, is a list of 2 or 3 valued tuples describing the data lines to be put on the plot. Its contents are as follows:

Index Required Description
0 yes A string intended to be the pythonic completion of lambda x: which, for each key in the keys list for this data entry (index 1), will be passed the data from the key
1 yes A list of keys to be used. For each entry in the recording, for each key in this list, the lambda from index 0 will be applied on the data to create the data point to be plotted
2 no Optional label to be used for the data in the plot. If not passed a reasonable default will be generated

Putting this all together, an example plots object for data recorded from the waveform serialized stream could look like:

"plots": [
    {
        "data": [
            ["x", ["sin", "cos"], "value"],
        ],
        "title": "Some Title",
        "y_label": "Some Y Label",
        "x_label": "Some X Label",
        "legend": true,
    },
    {
        "data": [
            ["max(-10, min(x, 10))", ["tan"], "value"],
        ],
        ...
    }
]

With this object we'll be generating two separate plots.

On the first plot we'll have two lines, one for sin and one for cos where we'll be graphing the raw data, x, from each entry. If we look at the waveform's serialized stream documentation we see that the serialized stream produces 3 keys: sin, cos and tan.

On the second plot we'll have just one line, the tan(x) value, though we run the data through a python lambda function to bound it in [-10, 10].

Lambdas

Lambdas in Python are simple one-line functions. See the Python docs for more detail.
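For example, the bounding expression used above completes to a lambda like:

clamp = lambda x: max(-10, min(x, 10))
clamp(42)   # returns 10
clamp(0.5)  # returns 0.5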

Response

A msgpack'd string with the success of the plotting function

On error, returns one of the error codes below:

Error Description
1 Name not provided
2 Failed to open recording file
3 Recording doesn't exist
4 Recording has 0 entries
5 plots not provided
6 Unable to process lambda for x values. x was specified, but the string provided wasn't able to be combined with lambda entry: to create a valid lambda
7 A plot object doesn't have a data field
8 A tuple from the data list of a plot object is the wrong length. Must be 2 or 3 values in size
9 A lambda from a tuple in a data list wasn't able to be combined with lambda x: to create a valid lambda
10 A key from the key list of a tuple in a data list doesn't exist in the recording

csv: Convert recording to CSV file

Save CSV in permanent location using msgpack on values

> command record csv {"name":"example", "msgpack":true, "perm":true}

Add description to filename

> command record csv {"name":"example", "msgpack":true, "perm":true, "desc":"test"}

Add lambda for column 0

> command record csv {"name":"example", "msgpack":true, "perm":true, "desc": "asin", "x":"__import__(\"math\").asin(entry[\"sin\"])"}

Multiply scale by 10x on all data

> command record csv {"name":"example", "msgpack":true, "perm":true, "desc": "multiplied", "lambdas": {"sin":"x * 10", "cos": "x * 5"}}
Request

The csv request takes a recording name and will create one csv file per key in the recording according to the passed parameters.

Key Required Default Description
name yes Name of the recording to convert, corresponding to the recording file named name.atomrec
msgpack no false Whether or not to use msgpack to unpack entry values before returning them. Consult the documentation of the stream producing the values to determine if this is necessary.
lambdas no Multi-typed, can be string or dictionary. If dictionary, key:lambda values to convert entry data into an iterable object that can then be written to the CSV. If a lambda is not specified for a key, will try to iterate over entry[key] and write values to columns. Intended to be the pythonic completion of lambda x: . If string, same as above except same lambda is applied to all keys
x no redis timestamp A string intended to be the pythonic completion of lambda entry: which will be passed the entry key:value map for each entry in the recording and is expected to return an x-value for the entry for column 0 of the CSV. This allows us to use something other than the redis timestamp for column 0, which is particularly useful when your data packets contain their own timestamps which are more accurate than the one auto-generated by redis
desc no Optional string. If specified, will tack this string onto the filename of the .csv files generated so that they're not overwritten
perm no false If true, store csv in permanent filesystem location, else in temporary filesystem location.
start no 0 Start entry index. The csv request will process all entries in the range [start, stop], inclusive
stop no -1 End entry index. The csv request will process all entries in the range [start, stop], inclusive
Response

A msgpack'd string indicating the success of the request

On error, returns one of the error codes below:

Error Description
1 Name not provided
2 Failed to open recording file
3 Recording doesn't exist
4 Failed to open output CSV file
5 Unable to process lambda for x values. x was specified, but the string provided wasn't able to be combined with lambda entry: to create a valid lambda
6 Unable to process lambda for a key. A lambda was specified, but the string provided wasn't able to be combined with lambda x: to create a valid lambda
7 lambdas argument is not a string or dictionary

docker-compose configuration

  record:
    image: elementaryrobotics/element-record
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
      - ".:/recordings"
    depends_on:
      - "nucleus"
    environment:
      - "GRAPHICS=1"

A pretty standard docker-compose configuration, noting that we can specify to use the in-container graphics through the GRAPHICS=1 setting. The main thing to be sure to do in here is to map some directory on your host computer to /recordings in the container! This is where the permanent files are stored; if this isn't done then nothing can be saved permanently. All of the temporary filesystem commands will still work.

Launch Options

stream-viewer

Build Status

CircleCI

Overview

The stream-viewer element is a GUI tool used for viewing image data that is written to a stream. It works with the realsense element's color and depth streams and is a useful tool for testing and debugging computer vision algorithms.

Inspect screenshot

docker-compose configuration

In order to save images from stream-viewer, you must mount your local ~/Pictures directory to /Pictures. This element also requires special flags to enable display forwarding.

  stream-viewer:
    image: elementaryrobotics/element-stream-viewer
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
      - "~/Pictures:/Pictures"
      - "/tmp/.X11-unix:/tmp/.X11-unix:rw"
    environment:
      - "DISPLAY"
      - "QT_X11_NO_MITSHM=1"
    depends_on:
      - "nucleus"

Usage

Since this element utilizes a GUI, we need to forward the display between Docker and the host machine. This command will allow the root user in the container to have access to the X Server. Run this command on the host machine.

xhost +SI:localuser:root

Then start the element following the usual steps.
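
Once you're done using the element, you can revoke this access with the corresponding xhost removal command:

xhost -SI:localuser:root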

Usage with realsense element

Start this element in conjunction with the realsense element.

Usage with your own image streams

Currently, stream-viewer will list all available streams in the atom system, but it can only view streams whose data is in a specific format. Specifically, this element expects a .tif-encoded image written to a stream, with the image bytes stored under the key data. This can be done in Python as follows:

_, tif_img = cv2.imencode(".tif", img)
element.entry_write("img", {"data": tif_img.tobytes()}, maxlen=30)

Where img is an OpenCV image.
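
For a fuller picture, here's a self-contained sketch that publishes webcam frames in the format stream-viewer expects. The element name ("my_camera") and stream name ("img") are arbitrary choices for illustration:

import cv2
from atom import Element

element = Element("my_camera")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, img = cap.read()
    if not ok:
        break
    # stream-viewer expects a .tif-encoded image under the "data" key
    _, tif_img = cv2.imencode(".tif", img)
    element.entry_write("img", {"data": tif_img.tobytes()}, maxlen=30)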

Voice

Build Status

Overview

The voice element listens to audio from the microphone and publishes all recognized strings on a stream named string. For now the voice element only runs on Linux systems, but we should be able to get it up and running on Mac/Windows without too much effort.

Currently uses Google Voice as a backend. All voice commands should be prepended with "OK Google".

Credentials

Generating Credentials

You'll need to hook your own Google voice credentials into the system in order for the voice module to work. To do this, perform the following steps:

  1. Go to the Actions Console and make a new project
  2. Once at the screen with a bunch of smart home card options, click on "device registration"
  3. Click on "register model" in the middle of the resulting page. Follow the prompts and fill the details for product name, device type, etc.

  4. Click on "Download OAuth 2.0 credentials" and rename the downloaded file to "my_client_secret.json"

  5. Put the my_client_secret.json file in a new, empty folder on your machine. We'll need it to generate credentials for the Google Assistant SDK. All of the packages needed to run the voice element are installed in the docker container, so we'll mount the folder with the secret into the docker container and generate the new credentials into that folder as well.

  6. Modify the docker-compose file to add this line in the volumes section: - "/path/to/secret/folder:/credentials"

  7. Modify the docker-compose.yml file to just boot the container and let us pull up a shell in it by setting command: tail -f /dev/null (see the example configuration after this list)

  8. Now, run docker-compose up -d and then docker exec -it voice bash to pull up a shell in the voice container. Run the following commands:

     export LC_ALL=C.UTF-8
     export LANG=C.UTF-8
     google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype --scope https://www.googleapis.com/auth/gcm --save --headless --client-secrets /credentials/my_client_secret.json

  9. This should generate a link that you can open in a browser to authenticate and generate a credential file. If you see an error about not having your OAuth registration page set up yet, see this document; it's something you just need to enable somewhere deep in the caverns of the google cloud console.

  10. Once through the authentication screen, you should get a code that you can then paste back into the google-oauthlib-tool prompt. If all goes well, the tool will spit out something like: credentials saved: /root/.config/google-oauthlib-tool/credentials.json

  11. Finally, we just need to move the credentials out of the container and onto our host machine so that we never have to go through this most convoluted of credential-generation processes again. Simply move them from where the tool put them into /credentials:

      mv /root/.config/google-oauthlib-tool/credentials.json /credentials/

  12. Voila! You should now have credentials on your host machine. Reset the docker-compose file to the example configuration below.

  13. If you hear the Google Assistant saying "something went wrong", you might still need to enable the Google Assistant API for your project. You should be able to do this from the google cloud console.
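
For reference, here's a sketch of what the temporary docker-compose service from steps 6 and 7 might look like while generating credentials (the secret folder path is a placeholder; substitute your own):

  voice:
    container_name: voice
    image: elementaryrobotics/element-voice
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
      - "/path/to/secret/folder:/credentials"
    depends_on:
      - "nucleus"
    command: tail -f /dev/null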

Passing Credentials to the Docker Container

  1. Ensure you have the credentials.json file from the previous step in a safe place on your machine.
  2. Link the credentials.json into the voice container at runtime using the volumes section as seen below. DO NOT CHECK THIS INTO THE CODE.
  3. Optionally, set the DEVICE_MODEL_ID parameter in the environment section of the docker-compose so that you can trace which google voice API calls are coming from which device.

Commands

None

Streams

Stream Format Description
string string Published each time the voice API detects the hotword and processes speech-to-text
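
As a quick sketch of consuming this stream with the atom Python client (the element name "voice" and the data key "string" are assumptions based on the table above):

from atom import Element

element = Element("voice_listener")

# Read the 10 most recent speech-to-text results from the voice element
entries = element.entry_read_n("voice", "string", 10)
for entry in entries:
    print(entry["string"])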

docker-compose configuration

  voice:
    container_name: voice
    image: elementaryrobotics/element-voice
    volumes:
      - type: volume
        source: shared
        target: /shared
        volume:
          nocopy: true
      - "~/google_voice/credentials.json:/code/google/credentials.json"
    depends_on:
      - "nucleus"
    environment:
      - "DEVICE_MODEL_ID=SOME_DEVICE_MODEL_ID"
    privileged: true

Note that we need the privileged: true setting in order to use the computer's built-in mic hardware.

Acknowledgements

Developers

Atom was architected and developed for Elementary Robotics by @dpipemazo and @cheripai. Further contributions have been and continue to be made by the below list of fantastic developers:

Open Source

Atom utilizes and builds atop open-source software to create the easiest environment to program a robot and share reusable robotic skills. Please see below for a list of notable open-source software used and click the links to find their respective licenses.