
libobscura lets you use your camera

libobscura

libobscura is a set of camera-related libraries, handling the difficult parts of using a Linux camera for you.

Minimal example

Get frames in 8 API calls, access their data in 2 more.

let cameras_list = vidi::actors::camera_list::spawn()?;
let cameras = cameras_list.cameras();
let camera = cameras_list.create(&cameras[0].info.id)
    .expect("No such camera")
    .expect("Failed to create camera");
let mut camera = camera.acquire();
if let Ok(ref mut camera) = camera {
    let configs = camera.get_supported_configs()?;
    let config = configs.query_configs(ConfigRequest {
        width: Some(640),
        height: Some(480),
        ..ConfigRequest::default()
    }).iter().next().expect("No supported configs");
    let mut stream = camera.start(config, 4).unwrap();
    loop {
        let (buf, meta, _next) = stream.next().unwrap();
        let mmap = buf.memory_map_ro().unwrap();
        let data = mmap.as_slice();
    }
}

Demo

cargo run --bin glium_2

Remember to plug in your camera!

Usage

Add libobscura registry to .cargo/config.toml:

[registries.libobscura]
index = "sparse+https://codeberg.org/api/packages/libobscura/cargo/"

and in Cargo.toml:

vidi = { version = "0.3", registry = "libobscura" }

Then copy from the examples.

Devices

Try out your USB camera:

cargo run --bin demo

The Librem 5 has support for collecting frames (demo not adapted yet):

LIBOBSCURA_DEVICES_DIR=crates/vidi/config/ cargo run --bin obscura_getframes -- 'imx7-csi:imx-media:776794edba9cf34e:s5k3l6xx 3-002d'

Aspirations

  • It's hard to use it wrong. No segfaults. Errors put you back on the right track.
  • Point-and-shoot. If that's all you need, you get an RGB buffer in ten lines of code.
  • It's easy to add support for new devices. Great documentation and a good internal API are the goals.
  • It's easy to contribute to. Send patches using the web interface, not a mailing list.

A baby placing a missing block. They are stacked in the Bayer pattern.

Figure: Libobscura will never be friendly enough for every audience.

Why should I care?

If you're an application developer, libobscura is for you! It gives you a reasonable amount of control while preventing mistakes, and it frees you from the trouble of implementing image processing yourself.

If you're a hardware manufacturer, libobscura is also for you! The goal is to make adding new devices really simple as soon as the kernel driver is done.

The libraries

Cameras under Linux require a lot of moving pieces, so libobscura is a collection of many libraries. Some of them were created just for this purpose, some of them were forked, some were revived.

Hardware support:

General purpose:

The code is stored in the libobscura organization on Codeberg; most crates live inside the libobscura repository.

Dependencies

Libobscura uses as few external dependencies as possible, but not any fewer than that. Rather than reimplementing the world, libobscura keeps its scope small to stay in that sweet spot. Bigger dependencies like OpenGL libraries are optional, so the user can decide whether they are wanted.

Special thanks to the logru project for accepting contributions and for being a backbone of libvidi!

Why is libobscura not on crates.io?

Libobscura is published on its own registry, which requires a small modification to Cargo config (.cargo/config.toml):

[registries.libobscura]
index = "sparse+https://codeberg.org/api/packages/libobscura/cargo/"

I [Dorota] will publish on crates.io once it stops requiring a GitHub login.

This makes a commercial entity the gatekeeper of a large chunk of the Rust community, which is a political bug. I find it simply insane to host Free Software packages but choose a gatekeeper as hostile to Free Software as Microsoft, and this is my protest.

I have a small bounty set aside for anyone who fixes that bug. If you want to, please contact me.

License

Libobscura adheres to the REUSE specification to provide copyright and licensing information for each file.

vidi-examples are distributed under the terms of the MIT or Apache 2.0 licenses, at your choice. See vidi-examples. Those licenses let you copy-paste the code into your application.

dma-boom is distributed under the terms of the MIT license.

The rest of libobscura is distributed under the terms of the LGPL 2.1 or later, at your choice. See COPYING.md.

  • If you distribute a modified version of those components, you must share your modifications. (The licenses, MPL 2.0 or LGPL 2.1 or LGPL 3.0, require this).
  • If you distribute those components in a Rust project, even as a dependency, in practice you must also include the sources of your software and other dependencies (check LGPL 2.1 or LGPL 3.0 for details).

Funding

Many thanks to Prototype Fund and the German Federal Ministry of Education and Research for paying Dorota to take on this crazy project.


Community

A tool is only as good as it is useful. So please reach out to us with feedback and improvements.

Libobscura can't survive with one person behind it, so please contribute. Even if only to say how you're using it.

Contact

Libobscura has a Matrix channel.

You can ask a question on the issue tracker.

Also, there is a wiki where users like you can write down their experiences and solutions – or read those of others.

Finally, you can contact one of the maintainers:

Contributing

Software is not just code, so there are many ways to make a difference:

By contributing code to libobscura, you agree to release it under the combination of licenses: LGPL 2.1 or later, or MPL 2.0. For changes to files in vidi-examples, you agree to release them under MIT or Apache-2.0 instead.

Don't be a jerk.

Maintainers' duties

The maintainers will do their best to respond to code contributions and issues within a couple days.

When reviewing a contribution of code, the maintainer will clearly say which complaints need to be addressed before the contribution is accepted, and which ones don't.

Development

As long as there is a maintainer in the above list, patches will get reviewed and issues triaged, both within a couple days. Security problems will likewise get fixed.

Adding new features will progress while the Prototype Fund funding is running (March 2025); after that, it will only happen sporadically.

If you'd like to see this experiment continue at full power, you have the following options:

  • contribute code or test your hardware or offer your camera knowledge to Dorota,
  • become a maintainer (talk to Dorota),
  • offer Dorota a wad of money, or
  • offer Dorota help with other projects (Wayland input methods) to free up the time to work on this.

Hacking

Installation

There are three main ways to hack on libobscura.

Whichever you choose, you'll need to compile it, and for that you'll need Rust. On Fedora:

sudo dnf -y install rust

Librem 5

An ancient version of Rust ships with PureOS. It can be replaced with a newer one from rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Test application

When modifying libobscura, it's best to start with a working test application.

For USB cameras, the end-to-end test is the "demo.rs" example.

Check out the code and build it:

git clone https://codeberg.org/libobscura/libobscura.git
cd libobscura
cargo run --example demo

Debug the library

Libobscura uses the tracing crate to present debug messages to the user.

Use your favorite logging backend to enable debug output. For tracing-subscriber, add this line at the start of your program:

tracing_subscriber::fmt::init();

Built-in tools and examples generally do that already. You can then enable tracing at runtime, for example:

RUST_LOG=debug cargo run --bin obscura_list
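In your own program, the same init call goes at the top of main, before any camera calls. A minimal sketch — only spawn() and cameras() come from the examples above; the println and the assumption that cameras() returns something with len() are illustrative:

fn main() {
    // Install the tracing-subscriber backend before anything else logs.
    tracing_subscriber::fmt::init();

    // From here on, RUST_LOG=debug shows libobscura's debug messages.
    let cameras_list = vidi::actors::camera_list::spawn()
        .expect("failed to start the camera tracker");
    // Assumes cameras() returns something with len(), as the indexing in the
    // minimal example suggests.
    println!("found {} camera(s)", cameras_list.cameras().len());
}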

Contribute to internals

The internals are documented throughout the code base, so refer to the source files themselves. Please report a bug if missing documentation gets in the way of your understanding.

There's an old design document in libvidi describing the general goals of the architecture. It's a little outdated and written in a stream-of-consciousness style, but it may help you understand the motivations.

Making applications with libobscura

Importing

The main crate of libobscura is libvidi. This crate controls the basic properties of cameras. To include it as a dependency in your application, add this to your Cargo.toml file:

[dependencies]
libvidi = { git = "https://codeberg.org/libobscura/libobscura.git", branch = "master" }

Caution! Libvidi is still evolving rapidly and may introduce breaking changes at any time.
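If you want to shield yourself from surprises, standard Cargo features let you pin the git dependency to a known-good commit via the rev key (the hash below is only a placeholder for one you have tested):

[dependencies]
libvidi = { git = "https://codeberg.org/libobscura/libobscura.git", rev = "<commit hash>" }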

Usage

The basic objects in libobscura are:

You need to use them all to get a picture from the camera:

use vidi;

// The tracker is notified about all supported cameras on the system.
let cameras_tracker = vidi::actors::camera_list::spawn()?;
// The list of cameras present at the moment
let cameras = cameras_tracker.cameras();
// The info for the first camera on the list
let camera_info = &cameras[0].info;
// Create a camera device
let camera = cameras_tracker.create(&camera_info.id)
    .unwrap().unwrap();

The operations so far are unlikely to fail under normal circumstances. The next step, though, will fail if some other application is already using the camera:

// Take exclusive ownership of the camera on the entire system.
let mut camera = camera.acquire().unwrap();
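If you'd rather not panic when the camera is busy, you can match on the result instead. A minimal sketch, assuming only the Ok/Err shape of acquire() shown in the examples above (the error message and the Debug formatting are illustrative):

// Handle a busy camera gracefully instead of unwrapping.
match camera.acquire() {
    Ok(_camera) => {
        // _camera is ready; continue with get_supported_configs(), start(), ...
    }
    Err(err) => {
        // Most likely another application already holds the camera.
        // (Assumes the error type implements Debug.)
        eprintln!("Could not acquire the camera: {:?}", err);
    }
}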

Now you're ready to start streaming and receive pictures.

Streams and buffers

Libobscura exposes two APIs for application developers to start streaming and receive pictures.

One is easy, but forces you to make a copy if you want to do anything complex: it's the "borrowing" API.

The "owning" API is more powerful: it avoids copies (zero-copy) and lets you send buffers across threads. As a downside, you must re-queue your buffers back in the camera manually, so you can cause dead locks and memory leaks.

Both APIs offer the same configuration options, so choose the appropriate one.

Borrowing

The "vidi_fetch_frame.rs" example uses the easy Stream API.

// Start capturing
let mut stream = camera.start(
    // Choose your preferred data format
    Config { fourcc: FourCC::new(b"YUYV"), width: 640, height: 480 },
    4,
).unwrap();
loop {
    // Get the next frame
    let (buf, meta, _next) = stream.next().unwrap();
    let mmap = buf.memory_map_ro().unwrap();
    let data = mmap.as_slice();
    // process the raw pixel data here
}

Note that the program will not get to the next frame until you finish processing this one, so spending too much time here will cause dropped frames. The buffer is borrowed and belongs to the Stream instance, so you can't send it to another thread for processing, either.
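If your processing takes longer than a frame interval, one common workaround with the borrowing API is to copy the pixels out of the borrowed buffer and hand the copy to a worker thread, so the capture loop can move on immediately. A minimal sketch using only the standard library, reusing the stream from the example above (the channel, worker thread, and processing placeholder are illustrative, not part of vidi):

use std::sync::mpsc;
use std::thread;

// Worker thread: receives copied frames so the capture loop stays fast.
let (tx, rx) = mpsc::channel::<Vec<u8>>();
let worker = thread::spawn(move || {
    for frame in rx {
        // Replace this with your own (possibly slow) image processing.
        let _ = frame.len();
    }
});

loop {
    let (buf, _meta, _next) = stream.next().unwrap();
    let mmap = buf.memory_map_ro().unwrap();
    // Copy the borrowed pixels so the buffer can go back to the stream quickly.
    if tx.send(mmap.as_slice().to_vec()).is_err() {
        break; // The worker has exited.
    }
}
drop(tx);
worker.join().unwrap();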

A mug with a black chain attached to its handle

Figure: A buffer borrowing API was chosen, among other reasons, to prevent losing buffers.

Owning

The owning API gives you ownership of buffers, and expects you to return them when you're done. Because you own the buffers, you can send them between threads and even lose and leak them – in safe code (leaking is not, strictly speaking, unsafe).

See the example "vidi_shared_buffers.rs".

A table with a tea set, with a trash bin underneath. There's a spoon in the bin

Figure: The owning API does not tie buffers to their owner (the stream); the user is responsible for returning them to prevent unusable resources (leaks).

Examples

There are more examples in the vidi-examples crate.

More complete applications are gathered in the vidi-tools crate.

API reference

All libobscura crates with a public API have their reference posted online:

Adding cameras

The camera support API has not been the focus yet.

This is quickly changing, with Librem 5 support being an upcoming target.

Status

For now, there is a pair of traits matching the needs of the UVC cameras: the minimal UnacquiredCameraImpl and the actually interesting AcquiredCameraImpl.

(Those traits are still in flux. After working on this for some time, it's not clear to me which parts should be generically implemented for every possible camera and which should call into the trait for specialized treatment. See module documentation.)

Every pipeline handler must place a CheckFn in the PIPELINES array. That function scans for cameras supported by your handler. It's best to look at how the UVC one works.

Limitations

Currently only V4L2-based cameras are even considered. So no IP cameras and no generated streams. Those may be better handled with something like PipeWire.

Even if there are other kinds of cameras worth supporting, we're focusing on this API first.

Solver

Libobscura relies on a solver to configure the camera pipeline. The solver uses rules describing every device in the pipeline to find a path through the devices such that the output image satisfies the user's constraints.

The rules are stored and described in the devices.pl file.

While the rules don't support arbitrary constraints yet (mathematical operations are very limited), this works and proves that a rules-based view is useful for building pipelines.

New cameras

To support a new camera pipeline, add rules describing every V4L2 entity in the pipeline: the sensor formats, the processing nodes, and the available conversions from Mbus to FourCC.

The solver is generic and will figure out the rest.

To make sure that your configuration checks out, query the device with the obscura_configs tool, like this:

LIBOBSCURA_DEVICES_DIR=crates/vidi/config cargo run --bin obscura_configs -- --repl 'uvcvideo:Integrated Camera: Integrated C:7fffe2fe3bab469b:Camera 1'

It will list the contents of the database and let you query it as the solver does.

Internal reference

The APIs needed to add support for a new camera pipeline are in the device config file.

Likewise, the APIs to add new processing shaders to crispy are not public.

The private API reference documents all of that, and more, for every libobscura crate:

You may distribute Libobscura under the terms of the LGPL version 2.1 or any later version, at your choice.

Copyright (C) 2023 Purism SPC
Copyright © 2024 DorotaC