Compare commits

...

14 Commits
v5.42 ... v5.44

Author SHA1 Message Date
Maxim Devaev
79987da1bf Bump version: 5.43 → 5.44 2023-10-14 01:47:24 +03:00
Maxim Devaev
05e5db09e4 fix 2023-10-12 04:19:23 +03:00
Maxim Devaev
55e432a529 Merge branch 'mp' 2023-10-12 04:13:10 +03:00
chr
4732c85ec4 Optimize JPEG scanline copy of yuv format (#235)
* opt jpeg scanline copy with yuv format

* remove unused macro
2023-10-12 04:11:34 +03:00
Michael Lynch
0ce7f28754 Correct typo on 'interval' (#236)
This fixes a minor typo on the word 'interval'.
2023-10-10 20:19:24 +03:00
Maxim Devaev
a2641dfcb6 some multiplane fixes 2023-10-10 20:13:57 +03:00
Artem
ec33425c05 Multi Planar device support (#233)
* added multi planar device support (RK3588 HDMI IN)

* sync with upstream version

* fix use local variable after free

Signed-off-by: Artem Mamonov <artyom.mamonov@gmail.com>

* request buffer length = VIDEO_MAX_PLANES for multi-planar devices

---------

Signed-off-by: Artem Mamonov <artyom.mamonov@gmail.com>
Co-authored-by: hongruichen <chraac@gmail.com>
2023-10-08 19:27:17 +03:00
Maxim Devaev
a4b4dd3932 Bump version: 5.42 → 5.43 2023-10-04 02:46:33 +03:00
Maxim Devaev
e952f787a0 moved ssl docs 2023-10-04 02:43:28 +03:00
Maxim Devaev
b3e4ea9c0f issue #230: processing any freshest valid buffer 2023-10-04 02:41:55 +03:00
Maxim Devaev
22a816b9b5 issue #230: fixed possible memory error 2023-10-04 02:41:55 +03:00
Stargirl Flowers
c96559e4ac Discard truncated JPEG frames (#230)
Hello! This patch works around an issue encountered with [ELP-USB100W03M]
cameras where they send a vast amount of invalid JPEGs when capturing
their MJPEG streams. These bad frames account for about 87% of captured
frames and cause issues for browsers and downstream applications.

Replaces #229

[ELP-USB100W03M]: https://www.webcamerausb.com/elp-10mp-free-driver-usb20-ov9712-cmos-sensor-hd-mjpeg-web-camera-board-720p-36mm-lens-p-116.html
2023-10-04 02:41:55 +03:00
Maxim Devaev
a52df47b29 skip broken frames and save only good 2023-10-04 02:41:55 +03:00
tallman5
68e7e97e74 SSL Proxy Scripts (#226)
* adding basic ssl steps

* added down the road section
2023-10-04 02:41:39 +03:00
15 changed files with 366 additions and 139 deletions


@@ -1,7 +1,7 @@
[bumpversion]
commit = True
tag = True
current_version = 5.42
current_version = 5.44
parse = (?P<major>\d+)\.(?P<minor>\d+)
serialize =
{major}.{minor}

docs/ssl/README.md (new file, 50 lines)

@@ -0,0 +1,50 @@
# Adding SSL
These days, browsers are not happy if you have HTTP content on an HTTPS page.
The browser will not show an HTTP stream on a page if the parent page is from a site which is using HTTPS.
The files in this folder configure an Nginx proxy in front of the µStreamer stream.
Using certbot, an SSL cert is created from Let's Encrypt and installed.
These scripts can be modified to add SSL to just about any HTTP server.
The scripts are not fire and forget.
They will require some pre-configuration and are interactive (you'll be asked questions while they're running).
They have been tested using the following setup.
1. A Raspberry Pi 4
1. µStreamer set up and running as a service
    1. Internally on port 8080
    1. Public port will be 5101
1. Verizon home Wi-Fi router
1. Domain registration from GoDaddy
## The Script
Below is an overview of the steps performed by `ssl-config.sh` (for Raspberry Pi OS):
1. Install snapd (certbot uses this for installation)
1. Install certbot
1. Get a free certificate from Let's Encrypt using certbot
1. Install nginx
1. Configure nginx to proxy for µStreamer
## Steps
1. Create a public DNS entry.
    1. Point it to the Pi itself, or to the public IP of the router behind which the Pi sits.
    1. This is managed at the domain registrar, such as GoDaddy.
    1. Use a subdomain, such as `webcam.domain.com`.
1. Port forwarding
    1. If using a Wi-Fi router, create a port-forwarding rule that passes traffic from port 80 to the Pi. Certbot needs this to verify that your DNS entry reaches the Pi, even if your final port will be something else.
    1. Create a second rule for your final setup. For example, forward traffic from the router on port 5101 to port 8080 on the Pi's IP.
1. Update the ustreamer-proxy file in this folder.
    1. Replace `your.domain.com` with your fully qualified domain; it appears in three places in the proxy file.
    1. Change the port in the `listen 5101 ssl` line if needed. This is the public port, not the port on which the µStreamer service is running.
    1. Change `proxy_pass http://127.0.0.1:8080;` to the working address of the internal µStreamer service.
1. Run the script.
    1. Stand by; certbot asks some basic questions, such as your email, the domain, and agreement to the terms.
    1. `bash ssl-config.sh`
1. Test your URL! (A quick smoke test is sketched below.)
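A minimal smoke test, assuming the example subdomain and public port from the steps above (`webcam.domain.com` and 5101):
```sh
# Fetch the landing page, discard the body, and print the TLS/HTTP chatter;
# a working proxy answers with HTTP 200 and a valid Let's Encrypt certificate
curl -sv -o /dev/null --max-time 5 https://webcam.domain.com:5101/
```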
## Down the Road
Two important points to keep in mind for the future:
1. Dynamic IP: most routers do not have a static IP address on the WAN side, so if you reboot your router or your internet provider assigns you a new IP, you'll have to update the DNS entry.
    1. Many routers have some sort of dynamic DNS feature that can update the DNS entry for you automatically. That functionality is outside the scope of this document.
1. SSL renewals: certbot automatically creates a task to renew the SSL certificate before it expires. Assuming the Pi is running all the time, this shouldn't be an issue. (A quick check is sketched below.)
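To confirm later that renewals are wired up, a non-destructive check (assuming the snap-based certbot install from `ssl-config.sh`):
```sh
# Simulate a renewal without replacing the live certificate
sudo certbot renew --dry-run
# The snap package registers a systemd timer for real renewals
systemctl list-timers | grep -i certbot
```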
## Enjoy!

docs/ssl/ssl-config.sh (new file, 20 lines)

@@ -0,0 +1,20 @@
#!/bin/bash
# bash (not plain sh) is required for the `echo -e` color codes below
echo -e "\e[32mInstalling snapd...\e[0m"
sudo apt install snapd -y
sudo snap install core
echo -e "\e[32mInstalling certbot, don't leave, it's going to ask questions...\e[0m"
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot certonly --standalone
sudo certbot renew --dry-run
echo -e "\e[32mInstalling nginx...\e[0m"
sudo apt-get install nginx -y
sudo cp ustreamer-proxy /etc/nginx/sites-available/ustreamer-proxy
sudo ln -s /etc/nginx/sites-available/ustreamer-proxy /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

docs/ssl/ustreamer-proxy (new file, 13 lines)

@@ -0,0 +1,13 @@
server {
    listen 5101 ssl;
    server_name your.domain.com;
    ssl_certificate /etc/letsencrypt/live/your.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your.domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080; # Change this to the uStreamer server address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}


@@ -1,6 +1,6 @@
.\" Manpage for ustreamer-dump.
.\" Open an issue or pull request to https://github.com/pikvm/ustreamer to correct errors or typos
.TH USTREAMER-DUMP 1 "version 5.42" "January 2021"
.TH USTREAMER-DUMP 1 "version 5.44" "January 2021"
.SH NAME
ustreamer-dump \- Dump uStreamer's memory sink to file


@@ -1,6 +1,6 @@
.\" Manpage for ustreamer.
.\" Open an issue or pull request to https://github.com/pikvm/ustreamer to correct errors or typos
.TH USTREAMER 1 "version 5.42" "November 2020"
.TH USTREAMER 1 "version 5.44" "November 2020"
.SH NAME
ustreamer \- stream MJPEG video from any V4L2 device to the network
@@ -248,7 +248,7 @@ Timeout for lock. Default: 1.
H264 bitrate in Kbps. Default: 5000.
.TP
.BR \-\-h264\-gop\ \fIN
Intarval between keyframes. Default: 30.
Interval between keyframes. Default: 30.
.TP
.BR \-\-h264\-m2m\-device\ \fI/dev/path
Path to V4L2 mem-to-mem encoder device. Default: auto-select.


@@ -3,7 +3,7 @@
pkgname=ustreamer
pkgver=5.42
pkgver=5.44
pkgrel=1
pkgdesc="Lightweight and fast MJPEG-HTTP streamer"
url="https://github.com/pikvm/ustreamer"


@@ -6,7 +6,7 @@
include $(TOPDIR)/rules.mk
PKG_NAME:=ustreamer
PKG_VERSION:=5.42
PKG_VERSION:=5.44
PKG_RELEASE:=1
PKG_MAINTAINER:=Maxim Devaev <mdevaev@gmail.com>


@@ -17,7 +17,7 @@ def _find_sources(suffix: str) -> list[str]:
if __name__ == "__main__":
setup(
name="ustreamer",
version="5.42",
version="5.44",
description="uStreamer tools",
author="Maxim Devaev",
author_email="mdevaev@gmail.com",


@@ -23,7 +23,7 @@
#pragma once
#define US_VERSION_MAJOR 5
#define US_VERSION_MINOR 42
#define US_VERSION_MINOR 44
#define US_MAKE_VERSION2(_major, _minor) #_major "." #_minor
#define US_MAKE_VERSION1(_major, _minor) US_MAKE_VERSION2(_major, _minor)


@@ -75,6 +75,7 @@ unsigned us_frame_get_padding(const us_frame_s *frame) {
case V4L2_PIX_FMT_YUYV:
case V4L2_PIX_FMT_UYVY:
case V4L2_PIX_FMT_RGB565: bytes_per_pixel = 2; break;
case V4L2_PIX_FMT_BGR24:
case V4L2_PIX_FMT_RGB24: bytes_per_pixel = 3; break;
// case V4L2_PIX_FMT_H264:
case V4L2_PIX_FMT_MJPEG:


@@ -41,6 +41,7 @@ static const struct {
{"UYVY", V4L2_PIX_FMT_UYVY},
{"RGB565", V4L2_PIX_FMT_RGB565},
{"RGB24", V4L2_PIX_FMT_RGB24},
{"BGR24", V4L2_PIX_FMT_BGR24},
{"MJPEG", V4L2_PIX_FMT_MJPEG},
{"JPEG", V4L2_PIX_FMT_JPEG},
};
@@ -54,6 +55,8 @@ static const struct {
};
static void _v4l2_buffer_copy(const struct v4l2_buffer *src, struct v4l2_buffer *dest);
static bool _device_is_buffer_valid(us_device_s *dev, const struct v4l2_buffer *buf, const uint8_t *data);
static int _device_open_check_cap(us_device_s *dev);
static int _device_open_dv_timings(us_device_s *dev);
static int _device_apply_dv_timings(us_device_s *dev);
@@ -82,6 +85,7 @@ static const char *_io_method_to_string_supported(enum v4l2_memory io_method);
#define _RUN(x_next) dev->run->x_next
#define _D_XIOCTL(...) us_xioctl(_RUN(fd), __VA_ARGS__)
#define _D_IS_MPLANE (_RUN(capture_type) == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
us_device_s *us_device_init(void) {
@@ -194,6 +198,10 @@ void us_device_close(us_device_s *dev) {
US_DELETE(HW(raw.data), free);
}
if (_D_IS_MPLANE) {
free(HW(buf.m.planes));
}
# undef HW
}
_RUN(n_bufs) = 0;
@@ -217,7 +225,7 @@ int us_device_export_to_dma(us_device_s *dev) {
for (unsigned index = 0; index < _RUN(n_bufs); ++index) {
struct v4l2_exportbuffer exp = {0};
exp.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
exp.type = _RUN(capture_type);
exp.index = index;
US_LOG_DEBUG("Exporting device buffer=%u to DMA ...", index);
@@ -244,7 +252,7 @@ int us_device_export_to_dma(us_device_s *dev) {
int us_device_switch_capturing(us_device_s *dev, bool enable) {
if (enable != _RUN(capturing)) {
enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
enum v4l2_buf_type type = _RUN(capture_type);
US_LOG_DEBUG("%s device capturing ...", (enable ? "Starting" : "Stopping"));
if (_D_XIOCTL((enable ? VIDIOC_STREAMON : VIDIOC_STREAMOFF), &type) < 0) {
@@ -310,67 +318,90 @@ int us_device_grab_buffer(us_device_s *dev, us_hw_buffer_s **hw) {
*hw = NULL;
struct v4l2_buffer buf = {0};
struct v4l2_plane buf_planes[VIDEO_MAX_PLANES] = {0};
if (_D_IS_MPLANE) {
// Just for _v4l2_buffer_copy(), buf.length is not needed here
buf.m.planes = buf_planes;
}
bool buf_got = false;
unsigned skipped = 0;
bool broken = false;
US_LOG_DEBUG("Grabbing device buffer ...");
do {
struct v4l2_buffer new = {0};
new.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
struct v4l2_plane new_planes[VIDEO_MAX_PLANES] = {0};
new.type = _RUN(capture_type);
new.memory = dev->io_method;
if (_D_IS_MPLANE) {
new.length = VIDEO_MAX_PLANES;
new.m.planes = new_planes;
}
const bool new_got = (_D_XIOCTL(VIDIOC_DQBUF, &new) >= 0);
if (new_got) {
if (new.index >= _RUN(n_bufs)) {
US_LOG_ERROR("V4L2 error: grabbed invalid device buffer=%u, n_bufs=%u", new.index, _RUN(n_bufs));
return -1;
}
# define GRABBED(x_buf) _RUN(hw_bufs)[x_buf.index].grabbed
# define FRAME_DATA(x_buf) _RUN(hw_bufs)[x_buf.index].raw.data
if (GRABBED(new)) {
US_LOG_ERROR("V4L2 error: grabbed device buffer=%u is already used", new.index);
return -1;
}
GRABBED(new) = true;
if (_D_IS_MPLANE) {
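// Multi-planar buffers report the payload size per plane; mirror plane 0
// into bytesused so the common single-planar code below can use it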
new.bytesused = new.m.planes[0].bytesused;
}
broken = !_device_is_buffer_valid(dev, &new, FRAME_DATA(new));
if (broken) {
US_LOG_DEBUG("Releasing device buffer=%u (broken frame) ...", new.index);
if (_D_XIOCTL(VIDIOC_QBUF, &new) < 0) {
US_LOG_PERROR("Can't release device buffer=%u (broken frame)", new.index);
return -1;
}
GRABBED(new) = false;
continue;
}
if (buf_got) {
if (_D_XIOCTL(VIDIOC_QBUF, &buf) < 0) {
US_LOG_PERROR("Can't release device buffer=%u (skipped frame)", buf.index);
return -1;
}
GRABBED(buf) = false;
++skipped;
// buf_got = false;
}
memcpy(&buf, &new, sizeof(struct v4l2_buffer));
# undef GRABBED
# undef FRAME_DATA
_v4l2_buffer_copy(&new, &buf);
buf_got = true;
} else {
if (buf_got && errno == EAGAIN) {
break;
} else {
US_LOG_PERROR("Can't grab device buffer");
return -1;
if (errno == EAGAIN) {
if (buf_got) {
break; // Process the latest valid frame we have
} else if (broken) {
return -2; // Only broken frames were seen in this capture session
}
}
US_LOG_PERROR("Can't grab device buffer");
return -1;
}
} while (true);
if (buf.index >= _RUN(n_bufs)) {
US_LOG_ERROR("V4L2 error: grabbed invalid device buffer=%u, n_bufs=%u", buf.index, _RUN(n_bufs));
return -1;
}
// Workaround for broken, corrupted frames:
// Under low light conditions corrupted frames may get captured.
// The good thing is such frames are quite small compared to the regular frames.
// For example a VGA (640x480) webcam frame is normally >= 8kByte large,
// corrupted frames are smaller.
if (buf.bytesused < dev->min_frame_size) {
US_LOG_DEBUG("Dropped too small frame, assuming it was broken: buffer=%u, bytesused=%u",
buf.index, buf.bytesused);
US_LOG_DEBUG("Releasing device buffer=%u (broken frame) ...", buf.index);
if (_D_XIOCTL(VIDIOC_QBUF, &buf) < 0) {
US_LOG_PERROR("Can't release device buffer=%u (broken frame)", buf.index);
return -1;
}
return -2;
}
# define HW(x_next) _RUN(hw_bufs)[buf.index].x_next
if (HW(grabbed)) {
US_LOG_ERROR("V4L2 error: grabbed device buffer=%u is already used", buf.index);
return -1;
}
HW(grabbed) = true;
HW(raw.dma_fd) = HW(dma_fd);
HW(raw.used) = buf.bytesused;
HW(raw.width) = _RUN(width);
@@ -378,12 +409,12 @@ int us_device_grab_buffer(us_device_s *dev, us_hw_buffer_s **hw) {
HW(raw.format) = _RUN(format);
HW(raw.stride) = _RUN(stride);
HW(raw.online) = true;
memcpy(&HW(buf), &buf, sizeof(struct v4l2_buffer));
HW(raw.grab_ts)= (long double)((buf.timestamp.tv_sec * (uint64_t)1000) + (buf.timestamp.tv_usec / 1000)) / 1000;
_v4l2_buffer_copy(&buf, &HW(buf));
HW(raw.grab_ts) = (long double)((buf.timestamp.tv_sec * (uint64_t)1000) + (buf.timestamp.tv_usec / 1000)) / 1000;
US_LOG_DEBUG("Grabbed new frame: buffer=%u, bytesused=%u, grab_ts=%.3Lf, latency=%.3Lf, skipped=%u",
buf.index, buf.bytesused, HW(raw.grab_ts), us_get_now_monotonic() - HW(raw.grab_ts), skipped);
# undef HW
*hw = &_RUN(hw_bufs[buf.index]);
return buf.index;
}
@@ -419,6 +450,55 @@ int us_device_consume_event(us_device_s *dev) {
return 0;
}
static void _v4l2_buffer_copy(const struct v4l2_buffer *src, struct v4l2_buffer *dest) {
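// The struct memcpy() below clobbers dest->m.planes, so save the caller's
// plane array first and restore it afterwards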
struct v4l2_plane *dest_planes = dest->m.planes;
memcpy(dest, src, sizeof(struct v4l2_buffer));
if (src->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
assert(dest_planes);
dest->m.planes = dest_planes;
memcpy(dest->m.planes, src->m.planes, sizeof(struct v4l2_plane) * VIDEO_MAX_PLANES);
}
}
bool _device_is_buffer_valid(us_device_s *dev, const struct v4l2_buffer *buf, const uint8_t *data) {
// Workaround for broken, corrupted frames:
// Under low light conditions corrupted frames may get captured.
// The good thing is such frames are quite small compared to the regular frames.
// For example a VGA (640x480) webcam frame is normally >= 8kByte large,
// corrupted frames are smaller.
if (buf->bytesused < dev->min_frame_size) {
US_LOG_DEBUG("Dropped too small frame, assuming it was broken: buffer=%u, bytesused=%u",
buf->index, buf->bytesused);
return false;
}
// Workaround for truncated JPEG frames:
// Some inexpensive CCTV-style USB webcams such as the ELP-USB100W03M send
// large amounts of these frames when using MJPEG streams. This checks that
// the buffer ends with either the JPEG end-of-image marker (0xFFD9), the last
// marker byte plus a padding byte (0xD900), or just padding bytes (0x0000).
// A more sophisticated method would scan for the end-of-image marker, but
// that takes precious CPU cycles and this should be good enough for most
// cases.
if (us_is_jpeg(dev->run->format)) {
if (buf->bytesused < 125) {
// https://stackoverflow.com/questions/2253404/what-is-the-smallest-valid-jpeg-file-size-in-bytes
US_LOG_DEBUG("Discarding invalid frame, too small to be a valid JPEG: bytesused=%u", buf->bytesused);
return false;
}
const uint8_t *const end_ptr = data + buf->bytesused;
const uint8_t *const eoi_ptr = end_ptr - 2;
const uint16_t eoi_marker = (((uint16_t)(eoi_ptr[0]) << 8) | eoi_ptr[1]);
if (eoi_marker != 0xFFD9 && eoi_marker != 0xD900 && eoi_marker != 0x0000) {
US_LOG_DEBUG("Discarding truncated JPEG frame: eoi_marker=0x%04x, bytesused=%u", eoi_marker, buf->bytesused);
return false;
}
}
return true;
}
static int _device_open_check_cap(us_device_s *dev) {
struct v4l2_capability cap = {0};
@@ -428,7 +508,13 @@ static int _device_open_check_cap(us_device_s *dev) {
return -1;
}
if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) {
_RUN(capture_type) = V4L2_BUF_TYPE_VIDEO_CAPTURE;
US_LOG_INFO("Using capture type: single-planar");
} else if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE) {
_RUN(capture_type) = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
US_LOG_INFO("Using capture type: multi-planar");
} else {
US_LOG_ERROR("Video capture is not supported by device");
return -1;
}
@@ -438,11 +524,13 @@ static int _device_open_check_cap(us_device_s *dev) {
return -1;
}
int input = dev->input; // Needs a pointer to int for ioctl()
US_LOG_INFO("Using input channel: %d", input);
if (_D_XIOCTL(VIDIOC_S_INPUT, &input) < 0) {
US_LOG_ERROR("Can't set input channel");
return -1;
if (!_D_IS_MPLANE) {
int input = dev->input; // Needs a pointer to int for ioctl()
US_LOG_INFO("Using input channel: %d", input);
if (_D_XIOCTL(VIDIOC_S_INPUT, &input) < 0) {
US_LOG_ERROR("Can't set input channel");
return -1;
}
}
if (dev->standard != V4L2_STD_UNKNOWN) {
@@ -520,16 +608,25 @@ static int _device_apply_dv_timings(us_device_s *dev) {
return 0;
}
static int _device_open_format(us_device_s *dev, bool first) {
static int _device_open_format(us_device_s *dev, bool first) { // FIXME
const unsigned stride = us_align_size(_RUN(width), 32) << 1;
struct v4l2_format fmt = {0};
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = _RUN(width);
fmt.fmt.pix.height = _RUN(height);
fmt.fmt.pix.pixelformat = dev->format;
fmt.fmt.pix.field = V4L2_FIELD_ANY;
fmt.fmt.pix.bytesperline = stride;
fmt.type = _RUN(capture_type);
if (_D_IS_MPLANE) {
fmt.fmt.pix_mp.width = _RUN(width);
fmt.fmt.pix_mp.height = _RUN(height);
fmt.fmt.pix_mp.pixelformat = dev->format;
fmt.fmt.pix_mp.field = V4L2_FIELD_ANY;
fmt.fmt.pix_mp.flags = 0;
fmt.fmt.pix_mp.num_planes = 1;
} else {
fmt.fmt.pix.width = _RUN(width);
fmt.fmt.pix.height = _RUN(height);
fmt.fmt.pix.pixelformat = dev->format;
fmt.fmt.pix.field = V4L2_FIELD_ANY;
fmt.fmt.pix.bytesperline = stride;
}
// Set format
US_LOG_DEBUG("Probing device format=%s, stride=%u, resolution=%ux%u ...",
@@ -539,13 +636,21 @@ static int _device_open_format(us_device_s *dev, bool first) {
return -1;
}
if (fmt.type != _RUN(capture_type)) {
US_LOG_ERROR("Capture format mismatch, please report to the developer");
return -1;
}
# define FMT(x_next) (_D_IS_MPLANE ? fmt.fmt.pix_mp.x_next : fmt.fmt.pix.x_next)
# define FMTS(x_next) (_D_IS_MPLANE ? fmt.fmt.pix_mp.plane_fmt[0].x_next : fmt.fmt.pix.x_next)
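// FMT()/FMTS() read the negotiated format through the multi-planar (pix_mp)
// or single-planar (pix) view of the v4l2_format union, per the capture type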
// Check resolution
bool retry = false;
if (fmt.fmt.pix.width != _RUN(width) || fmt.fmt.pix.height != _RUN(height)) {
if (FMT(width) != _RUN(width) || FMT(height) != _RUN(height)) {
US_LOG_ERROR("Requested resolution=%ux%u is unavailable", _RUN(width), _RUN(height));
retry = true;
}
if (_device_apply_resolution(dev, fmt.fmt.pix.width, fmt.fmt.pix.height) < 0) {
if (_device_apply_resolution(dev, FMT(width), FMT(height)) < 0) {
return -1;
}
if (first && retry) {
@@ -554,27 +659,32 @@ static int _device_open_format(us_device_s *dev, bool first) {
US_LOG_INFO("Using resolution: %ux%u", _RUN(width), _RUN(height));
// Check format
if (fmt.fmt.pix.pixelformat != dev->format) {
if (FMT(pixelformat) != dev->format) {
US_LOG_ERROR("Could not obtain the requested format=%s; driver gave us %s",
_format_to_string_supported(dev->format),
_format_to_string_supported(fmt.fmt.pix.pixelformat));
_format_to_string_supported(FMT(pixelformat)));
char *format_str;
if ((format_str = (char *)_format_to_string_nullable(fmt.fmt.pix.pixelformat)) != NULL) {
if ((format_str = (char *)_format_to_string_nullable(FMT(pixelformat))) != NULL) {
US_LOG_INFO("Falling back to format=%s", format_str);
} else {
char fourcc_str[8];
US_LOG_ERROR("Unsupported format=%s (fourcc)",
us_fourcc_to_string(fmt.fmt.pix.pixelformat, fourcc_str, 8));
us_fourcc_to_string(FMT(pixelformat), fourcc_str, 8));
return -1;
}
}
_RUN(format) = fmt.fmt.pix.pixelformat;
_RUN(format) = FMT(pixelformat);
US_LOG_INFO("Using format: %s", _format_to_string_supported(_RUN(format)));
_RUN(stride) = fmt.fmt.pix.bytesperline;
_RUN(raw_size) = fmt.fmt.pix.sizeimage; // Only for userptr
_RUN(stride) = FMTS(bytesperline);
_RUN(raw_size) = FMTS(sizeimage); // Only for userptr
# undef FMTS
# undef FMT
return 0;
}
@@ -582,7 +692,7 @@ static void _device_open_hw_fps(us_device_s *dev) {
_RUN(hw_fps) = 0;
struct v4l2_streamparm setfps = {0};
setfps.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
setfps.type = _RUN(capture_type);
US_LOG_DEBUG("Querying HW FPS ...");
if (_D_XIOCTL(VIDIOC_G_PARM, &setfps) < 0) {
@@ -602,7 +712,7 @@ static void _device_open_hw_fps(us_device_s *dev) {
# define SETFPS_TPF(x_next) setfps.parm.capture.timeperframe.x_next
US_MEMSET_ZERO(setfps);
setfps.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
setfps.type = _RUN(capture_type);
SETFPS_TPF(numerator) = 1;
SETFPS_TPF(denominator) = (dev->desired_fps == 0 ? 255 : dev->desired_fps);
@@ -665,7 +775,7 @@ static int _device_open_io_method(us_device_s *dev) {
static int _device_open_io_method_mmap(us_device_s *dev) {
struct v4l2_requestbuffers req = {0};
req.count = dev->n_bufs;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.type = _RUN(capture_type);
req.memory = V4L2_MEMORY_MMAP;
US_LOG_DEBUG("Requesting %u device buffers for MMAP ...", req.count);
@@ -686,9 +796,14 @@ static int _device_open_io_method_mmap(us_device_s *dev) {
US_CALLOC(_RUN(hw_bufs), req.count);
for (_RUN(n_bufs) = 0; _RUN(n_bufs) < req.count; ++_RUN(n_bufs)) {
struct v4l2_buffer buf = {0};
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
struct v4l2_plane planes[VIDEO_MAX_PLANES] = {0};
buf.type = _RUN(capture_type);
buf.memory = V4L2_MEMORY_MMAP;
buf.index = _RUN(n_bufs);
if (_D_IS_MPLANE) {
buf.m.planes = planes;
buf.length = VIDEO_MAX_PLANES;
}
US_LOG_DEBUG("Calling us_xioctl(VIDIOC_QUERYBUF) for device buffer=%u ...", _RUN(n_bufs));
if (_D_XIOCTL(VIDIOC_QUERYBUF, &buf) < 0) {
@@ -700,20 +815,28 @@ static int _device_open_io_method_mmap(us_device_s *dev) {
HW(dma_fd) = -1;
const size_t buf_size = (_D_IS_MPLANE ? buf.m.planes[0].length : buf.length);
const off_t buf_offset = (_D_IS_MPLANE ? buf.m.planes[0].m.mem_offset : buf.m.offset);
US_LOG_DEBUG("Mapping device buffer=%u ...", _RUN(n_bufs));
if ((HW(raw.data) = mmap(
NULL,
buf.length,
buf_size,
PROT_READ | PROT_WRITE,
MAP_SHARED,
_RUN(fd),
buf.m.offset
buf_offset
)) == MAP_FAILED) {
US_LOG_PERROR("Can't map device buffer=%u", _RUN(n_bufs));
return -1;
}
assert(HW(raw.data) != NULL);
HW(raw.allocated) = buf.length;
HW(raw.allocated) = buf_size;
if (_D_IS_MPLANE) {
US_CALLOC(HW(buf.m.planes), VIDEO_MAX_PLANES);
}
# undef HW
}
@@ -723,7 +846,7 @@ static int _device_open_io_method_mmap(us_device_s *dev) {
static int _device_open_io_method_userptr(us_device_s *dev) {
struct v4l2_requestbuffers req = {0};
req.count = dev->n_bufs;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.type = _RUN(capture_type);
req.memory = V4L2_MEMORY_USERPTR;
US_LOG_DEBUG("Requesting %u device buffers for USERPTR ...", req.count);
@@ -751,6 +874,9 @@ static int _device_open_io_method_userptr(us_device_s *dev) {
assert((HW(raw.data) = aligned_alloc(page_size, buf_size)) != NULL);
memset(HW(raw.data), 0, buf_size);
HW(raw.allocated) = buf_size;
if (_D_IS_MPLANE) {
US_CALLOC(HW(buf.m.planes), VIDEO_MAX_PLANES);
}
# undef HW
}
return 0;
@@ -759,10 +885,18 @@ static int _device_open_io_method_userptr(us_device_s *dev) {
static int _device_open_queue_buffers(us_device_s *dev) {
for (unsigned index = 0; index < _RUN(n_bufs); ++index) {
struct v4l2_buffer buf = {0};
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
struct v4l2_plane planes[VIDEO_MAX_PLANES] = {0};
buf.type = _RUN(capture_type);
buf.memory = dev->io_method;
buf.index = index;
if (_D_IS_MPLANE) {
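// Queue a single plane per buffer: the format was negotiated with num_planes = 1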
buf.m.planes = planes;
buf.length = 1;
}
if (dev->io_method == V4L2_MEMORY_USERPTR) {
// I'm not sure, maybe this is incorrect for an mplane device,
// but I don't have one that supports V4L2_MEMORY_USERPTR
buf.m.userptr = (unsigned long)_RUN(hw_bufs)[index].raw.data;
buf.length = _RUN(hw_bufs)[index].raw.allocated;
}


@@ -61,7 +61,7 @@
#define US_STANDARDS_STR "PAL, NTSC, SECAM"
#define US_FORMAT_UNKNOWN -1
#define US_FORMATS_STR "YUYV, UYVY, RGB565, RGB24, MJPEG, JPEG"
#define US_FORMATS_STR "YUYV, UYVY, RGB565, RGB24, BGR24, MJPEG, JPEG"
#define US_IO_METHOD_UNKNOWN -1
#define US_IO_METHODS_STR "MMAP, USERPTR"
@@ -75,18 +75,19 @@ typedef struct {
} us_hw_buffer_s;
typedef struct {
int fd;
unsigned width;
unsigned height;
unsigned format;
unsigned stride;
unsigned hw_fps;
unsigned jpeg_quality;
size_t raw_size;
unsigned n_bufs;
us_hw_buffer_s *hw_bufs;
bool capturing;
bool persistent_timeout_reported;
int fd;
unsigned width;
unsigned height;
unsigned format;
unsigned stride;
unsigned hw_fps;
unsigned jpeg_quality;
size_t raw_size;
unsigned n_bufs;
us_hw_buffer_s *hw_bufs;
enum v4l2_buf_type capture_type;
bool capturing;
bool persistent_timeout_reported;
} us_device_runtime_s;
typedef enum {
@@ -132,9 +133,7 @@ typedef struct {
size_t min_frame_size;
bool persistent;
unsigned timeout;
us_controls_s ctl;
us_device_runtime_s *run;
} us_device_s;


@@ -41,6 +41,7 @@ static void _jpeg_write_scanlines_yuyv(struct jpeg_compress_struct *jpeg, const
static void _jpeg_write_scanlines_uyvy(struct jpeg_compress_struct *jpeg, const us_frame_s *frame);
static void _jpeg_write_scanlines_rgb565(struct jpeg_compress_struct *jpeg, const us_frame_s *frame);
static void _jpeg_write_scanlines_rgb24(struct jpeg_compress_struct *jpeg, const us_frame_s *frame);
static void _jpeg_write_scanlines_bgr24(struct jpeg_compress_struct *jpeg, const us_frame_s *frame);
static void _jpeg_init_destination(j_compress_ptr jpeg);
static boolean _jpeg_empty_output_buffer(j_compress_ptr jpeg);
@@ -63,7 +64,7 @@ void us_cpu_encoder_compress(const us_frame_s *src, us_frame_s *dest, unsigned q
jpeg.image_width = src->width;
jpeg.image_height = src->height;
jpeg.input_components = 3;
jpeg.in_color_space = JCS_RGB;
jpeg.in_color_space = ((src->format == V4L2_PIX_FMT_YUYV || src->format == V4L2_PIX_FMT_UYVY) ? JCS_YCbCr : JCS_RGB);
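// Declaring YCbCr input lets libjpeg skip its RGB-to-YCbCr conversion stage,
// so the YUYV/UYVY scanline writers can copy components almost directly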
jpeg_set_defaults(&jpeg);
jpeg_set_quality(&jpeg, quality, TRUE);
@@ -79,6 +80,7 @@ void us_cpu_encoder_compress(const us_frame_s *src, us_frame_s *dest, unsigned q
WRITE_SCANLINES(V4L2_PIX_FMT_UYVY, _jpeg_write_scanlines_uyvy);
WRITE_SCANLINES(V4L2_PIX_FMT_RGB565, _jpeg_write_scanlines_rgb565);
WRITE_SCANLINES(V4L2_PIX_FMT_RGB24, _jpeg_write_scanlines_rgb24);
WRITE_SCANLINES(V4L2_PIX_FMT_BGR24, _jpeg_write_scanlines_bgr24);
default: assert(0 && "Unsupported input format for CPU encoder");
}
@@ -106,39 +108,29 @@ static void _jpeg_set_dest_frame(j_compress_ptr jpeg, us_frame_s *frame) {
frame->used = 0;
}
#define YUV_R(_y, _, _v) (((_y) + (359 * (_v))) >> 8)
#define YUV_G(_y, _u, _v) (((_y) - (88 * (_u)) - (183 * (_v))) >> 8)
#define YUV_B(_y, _u, _) (((_y) + (454 * (_u))) >> 8)
#define NORM_COMPONENT(_x) (((_x) > 255) ? 255 : (((_x) < 0) ? 0 : (_x)))
static void _jpeg_write_scanlines_yuyv(struct jpeg_compress_struct *jpeg, const us_frame_s *frame) {
uint8_t *line_buf;
US_CALLOC(line_buf, frame->width * 3);
const unsigned padding = us_frame_get_padding(frame);
const uint8_t *data = frame->data;
unsigned z = 0;
while (jpeg->next_scanline < frame->height) {
uint8_t *ptr = line_buf;
for (unsigned x = 0; x < frame->width; ++x) {
const int y = (!z ? data[0] << 8 : data[2] << 8);
const int u = data[1] - 128;
const int v = data[3] - 128;
// See also: https://www.kernel.org/doc/html/v4.8/media/uapi/v4l/pixfmt-yuyv.html
const bool is_odd_pixel = x & 1;
const uint8_t y = data[is_odd_pixel ? 2 : 0];
const uint8_t u = data[1];
const uint8_t v = data[3];
const int r = YUV_R(y, u, v);
const int g = YUV_G(y, u, v);
const int b = YUV_B(y, u, v);
ptr[0] = y;
ptr[1] = u;
ptr[2] = v;
ptr += 3;
*(ptr++) = NORM_COMPONENT(r);
*(ptr++) = NORM_COMPONENT(g);
*(ptr++) = NORM_COMPONENT(b);
if (z++) {
z = 0;
data += 4;
}
data += (is_odd_pixel ? 4 : 0);
}
data += padding;
@@ -155,28 +147,23 @@ static void _jpeg_write_scanlines_uyvy(struct jpeg_compress_struct *jpeg, const
const unsigned padding = us_frame_get_padding(frame);
const uint8_t *data = frame->data;
unsigned z = 0;
while (jpeg->next_scanline < frame->height) {
uint8_t *ptr = line_buf;
for (unsigned x = 0; x < frame->width; ++x) {
const int y = (!z ? data[1] << 8 : data[3] << 8);
const int u = data[0] - 128;
const int v = data[2] - 128;
// See also: https://www.kernel.org/doc/html/v4.8/media/uapi/v4l/pixfmt-uyvy.html
const bool is_odd_pixel = x & 1;
const uint8_t y = data[is_odd_pixel ? 3 : 1];
const uint8_t u = data[0];
const uint8_t v = data[2];
const int r = YUV_R(y, u, v);
const int g = YUV_G(y, u, v);
const int b = YUV_B(y, u, v);
ptr[0] = y;
ptr[1] = u;
ptr[2] = v;
ptr += 3;
*(ptr++) = NORM_COMPONENT(r);
*(ptr++) = NORM_COMPONENT(g);
*(ptr++) = NORM_COMPONENT(b);
if (z++) {
z = 0;
data += 4;
}
data += (is_odd_pixel ? 4 : 0);
}
data += padding;
@@ -187,11 +174,6 @@ static void _jpeg_write_scanlines_uyvy(struct jpeg_compress_struct *jpeg, const
free(line_buf);
}
#undef NORM_COMPONENT
#undef YUV_B
#undef YUV_G
#undef YUV_R
static void _jpeg_write_scanlines_rgb565(struct jpeg_compress_struct *jpeg, const us_frame_s *frame) {
uint8_t *line_buf;
US_CALLOC(line_buf, frame->width * 3);
@@ -205,9 +187,10 @@ static void _jpeg_write_scanlines_rgb565(struct jpeg_compress_struct *jpeg, cons
for (unsigned x = 0; x < frame->width; ++x) {
const unsigned int two_byte = (data[1] << 8) + data[0];
*(ptr++) = data[1] & 248; // Red
*(ptr++) = (uint8_t)((two_byte & 2016) >> 3); // Green
*(ptr++) = (data[0] & 31) * 8; // Blue
ptr[0] = data[1] & 248; // Red
ptr[1] = (uint8_t)((two_byte & 2016) >> 3); // Green
ptr[2] = (data[0] & 31) * 8; // Blue
ptr += 3;
data += 2;
}
@@ -232,6 +215,33 @@ static void _jpeg_write_scanlines_rgb24(struct jpeg_compress_struct *jpeg, const
}
}
static void _jpeg_write_scanlines_bgr24(struct jpeg_compress_struct *jpeg, const us_frame_s *frame) {
uint8_t *line_buf;
US_CALLOC(line_buf, frame->width * 3);
const unsigned padding = us_frame_get_padding(frame);
uint8_t *data = frame->data;
while (jpeg->next_scanline < frame->height) {
uint8_t *ptr = line_buf;
// Swap B and R so libjpeg receives RGB byte order
for (unsigned x = 0; x < frame->width * 3; x += 3) {
ptr[0] = data[x + 2];
ptr[1] = data[x + 1];
ptr[2] = data[x];
ptr += 3;
}
JSAMPROW scanlines[1] = {line_buf};
jpeg_write_scanlines(jpeg, scanlines, 1);
data += (frame->width * 3) + padding;
}
free(line_buf);
}
#define JPEG_OUTPUT_BUFFER_SIZE ((size_t)4096)
static void _jpeg_init_destination(j_compress_ptr jpeg) {


@@ -696,7 +696,7 @@ static void _help(FILE *fp, const us_device_s *dev, const us_encoder_s *enc, con
ADD_SINK("RAW", "raw-")
ADD_SINK("H264", "h264-")
SAY(" --h264-bitrate <kbps> ───────── H264 bitrate in Kbps. Default: %u.\n", stream->h264_bitrate);
SAY(" --h264-gop <N> ──────────────── Intarval between keyframes. Default: %u.\n", stream->h264_gop);
SAY(" --h264-gop <N> ──────────────── Interval between keyframes. Default: %u.\n", stream->h264_gop);
SAY(" --h264-m2m-device </dev/path> ─ Path to V4L2 M2M encoder device. Default: auto select.\n");
# undef ADD_SINK
# ifdef WITH_GPIO