JavaCV in Practice: Fundamentals of Accessing the Camera
Introduction
JavaCV is a set of Java wrappers around OpenCV (and other native libraries such as FFmpeg) that lets Java developers implement computer vision and image processing tasks quickly and easily. Among other things, JavaCV can access the camera to build many interesting features.
Basic Workflow
The basic workflow for accessing a camera with JavaCV is roughly as follows:
- Set up the JavaCV environment: download and install JavaCV (including the OpenCV native libraries) and configure your IDE.
- Initialize the camera: select the camera to use, open it, and initialize it.
- Capture frame data: read pixel data from the camera frame by frame.
- Process frame data: apply whatever processing, operations, or algorithms you need to each captured frame.
- Display the result: show the final processed image or video.
The following sections walk through each step in detail.
Setting Up the JavaCV Environment
- Download and install the JavaCV library: the latest release is available on the official site (https://github.com/bytedeco/javacv/releases) or from Maven. If you install manually, unpack the archive and add the files in its bin folder to the PATH environment variable.
- Configure the IDE: add the JavaCV jar files to your project's build path.
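If you use Maven, the javacv-platform artifact pulls in JavaCV together with matching native binaries for your platform, so no manual PATH setup is needed. A sketch of the dependency (check the releases page for the current version number):

```xml
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.5.9</version>
</dependency>
```

Gradle users can add the equivalent `org.bytedeco:javacv-platform` coordinate instead.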
Initializing the Camera
- Enumerating cameras
On Windows, JavaCV bundles the videoInput library, which can enumerate the available capture devices; on other platforms you usually just probe device indexes 0, 1, 2, and so on. For example (package names here follow JavaCV 1.5+; older versions use org.bytedeco.javacpp.* instead):
import org.bytedeco.videoinput.videoInput;

public class CameraUtil {
    public static void main(String[] args) {
        int count = videoInput.listDevices(); // number of capture devices found
        for (int i = 0; i < count; i++) {
            System.out.println(i + ": " + videoInput.getDeviceName(i).getString());
        }
    }
}
- Opening the camera
Once you have chosen a camera, open it by constructing an org.bytedeco.opencv.opencv_videoio.VideoCapture with the device index (or by calling its open() method afterwards), and check whether it opened successfully with isOpened():
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

public class CameraUtil {
    public static void main(String[] args) {
        // JavaCPP loads the native libraries automatically; no java.library.path tweaking is needed
        VideoCapture camera = null;
        try {
            camera = new VideoCapture(0); // device index 0
            if (camera.isOpened()) {
                System.out.println("Camera is opened.");
            } else {
                System.out.println("Can not open camera.");
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            if (camera != null) {
                camera.release(); // always free the device
            }
        }
    }
}
Capturing Frame Data
After the camera is opened successfully, each captured frame can be stored in an org.bytedeco.opencv.opencv_core.Mat object by calling VideoCapture.read(), as in the following example:
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

public class CameraUtil {
    public static void main(String[] args) {
        VideoCapture camera = null;
        try {
            camera = new VideoCapture(0);
            if (!camera.isOpened()) {
                System.out.println("Can not open camera.");
                return;
            }
            Mat mat = new Mat();
            while (true) {
                boolean isCaptured = camera.read(mat); // false on failure or end of stream
                if (!isCaptured) {
                    break;
                }
                // process the captured mat frame ...
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            if (camera != null) {
                camera.release();
            }
        }
    }
}
Processing Frame Data
Once frames are coming in, each one can be filtered, warped, cropped, binarized, and so on; the concrete operations depend on your application.
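As one concrete illustration of the operations above, the following is a minimal pure-Java sketch of binarization (thresholding), working directly on the raw byte buffer you could copy out of a grayscale Mat. The class and method names are hypothetical; in a real JavaCV application you would normally use OpenCV's built-in routines (e.g. opencv_imgproc's cvtColor and threshold) instead of hand-rolling the loop.

```java
// Hypothetical sketch: binarizing a grayscale frame buffer in plain Java.
public class ThresholdDemo {

    // Pixels >= threshold become 255 (white); all others become 0 (black).
    static byte[] binarize(byte[] gray, int threshold) {
        byte[] out = new byte[gray.length];
        for (int i = 0; i < gray.length; i++) {
            int v = gray[i] & 0xFF; // interpret the byte as an unsigned 0..255 value
            out[i] = (byte) (v >= threshold ? 255 : 0);
        }
        return out;
    }

    public static void main(String[] args) {
        // A tiny fake "frame" of four grayscale pixels
        byte[] frame = {(byte) 10, (byte) 200, (byte) 127, (byte) 128};
        byte[] bin = binarize(frame, 128);
        for (byte b : bin) {
            System.out.print((b & 0xFF) + " "); // prints: 0 255 0 255
        }
        System.out.println();
    }
}
```

The `& 0xFF` masking matters because Java bytes are signed: pixel value 200 is stored as the byte -56, and comparing it without the mask would classify bright pixels as dark.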
Displaying the Result
After processing, the result can be displayed with OpenCV's own windows or with a Swing/JavaFX control. A Swing example:
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

import javax.swing.*;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class CameraUtil {
    public static void main(String[] args) {
        VideoCapture camera = null;
        try {
            camera = new VideoCapture(0);
            if (!camera.isOpened()) {
                System.out.println("Can not open camera.");
                return;
            }
            JFrame jFrame = new JFrame();
            jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            jFrame.setSize(640, 480);
            JLabel jLabel = new JLabel();
            jFrame.setContentPane(jLabel);
            jFrame.setVisible(true);
            Mat mat = new Mat();
            while (camera.read(mat)) {
                BufferedImage bi = toBufferedImage(mat); // Mat to BufferedImage
                if (bi != null) {
                    // Swing components must only be updated on the event dispatch thread
                    SwingUtilities.invokeLater(() -> jLabel.setIcon(new ImageIcon(bi)));
                }
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            if (camera != null) {
                camera.release();
            }
        }
    }

    private static BufferedImage toBufferedImage(Mat mat) {
        if (mat == null || mat.empty()) {
            return null;
        }
        // OpenCV stores color frames as BGR, which matches TYPE_3BYTE_BGR directly
        int type = mat.channels() > 1 ? BufferedImage.TYPE_3BYTE_BGR : BufferedImage.TYPE_BYTE_GRAY;
        int bufferSize = mat.channels() * mat.cols() * mat.rows();
        byte[] bytes = new byte[bufferSize];
        mat.data().get(bytes);
        BufferedImage image = new BufferedImage(mat.cols(), mat.rows(), type);
        byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(bytes, 0, targetPixels, 0, bytes.length);
        return image;
    }
}
Examples
Example 1: Simple camera capture
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

public class CameraUtil {
    public static void main(String[] args) {
        VideoCapture camera = null;
        try {
            camera = new VideoCapture(0);
            if (camera.isOpened()) {
                System.out.println("Camera is opened.");
            } else {
                System.out.println("Can not open camera.");
                return;
            }
            Mat mat = new Mat();
            while (camera.read(mat)) {
                // each iteration leaves the latest frame in mat
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            if (camera != null) {
                camera.release();
            }
        }
    }
}
Example 2: Using the camera from a JavaFX application
import javafx.application.Application;
import javafx.application.Platform;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

import java.awt.image.BufferedImage;

public class CameraFX extends Application {
    private final static int WIDTH = 640;
    private final static int HEIGHT = 480;
    private volatile boolean running = true;

    @Override
    public void start(Stage primaryStage) {
        ImageView imageView = new ImageView();
        BorderPane borderPane = new BorderPane(imageView);
        primaryStage.setScene(new Scene(borderPane, WIDTH, HEIGHT));
        primaryStage.setOnCloseRequest(e -> running = false);
        primaryStage.show();

        // Grab frames on a background thread so the FX application thread stays responsive
        Thread grabber = new Thread(() -> {
            VideoCapture camera = new VideoCapture(0);
            try {
                if (!camera.isOpened()) {
                    System.out.println("Can not open camera.");
                    return;
                }
                Mat mat = new Mat();
                while (running && camera.read(mat)) {
                    javafx.scene.image.Image image = mat2Image(mat);
                    // UI updates must happen on the FX application thread
                    Platform.runLater(() -> imageView.setImage(image));
                }
            } finally {
                camera.release();
            }
        });
        grabber.setDaemon(true);
        grabber.start();
    }

    private static javafx.scene.image.Image mat2Image(Mat frame) {
        // copy the raw BGR bytes out of the Mat into a temporary buffer
        byte[] b = new byte[frame.cols() * frame.rows() * frame.channels()];
        frame.data().get(b);
        // TYPE_3BYTE_BGR matches OpenCV's BGR channel order, so no per-pixel swap is needed
        BufferedImage bi = new BufferedImage(frame.cols(), frame.rows(), BufferedImage.TYPE_3BYTE_BGR);
        bi.getRaster().setDataElements(0, 0, frame.cols(), frame.rows(), b);
        // convert the BufferedImage into a JavaFX Image
        return SwingFXUtils.toFXImage(bi, null);
    }

    public static void main(String[] args) {
        launch(args);
    }
}
Unless otherwise noted, articles on this site are original. If you reproduce this article, please credit the source: JavaCV实战之调用摄像头基础详解 - Python技术站