[Nezha Developer Kit Trial] + YOLO-OBB Application
Original article: [Nezha Developer Kit Trial] + YOLO-OBB Application, by weixin_44397923, Intel Developer Kit Zone
With the arrival of Industry 4.0, deep learning is being applied in industrial settings more and more often. To keep pace with the market and the industry, I built the following project case, drawing on a project by the well-known open-source author Yan Guojin, an Intel Edge Computing Innovation Ambassador, combined with my own understanding of the field.
I. What is YOLO
YOLO (You Only Look Once) is a popular object detection algorithm known for its strong real-time performance.
Traditional object detection algorithms are usually two-stage: a region-proposal method first generates candidate object regions, and those candidates are then classified and refined with bounding-box regression.
This two-stage approach is slow, which makes it poorly suited to real-time applications.
YOLO instead casts object detection as a regression problem: it divides the input image into a fixed-size grid and, for each grid cell, predicts the bounding boxes and classes of any objects that may be present.
Candidate extraction and classification are done in a single forward pass, which greatly improves detection speed.
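The single-pass idea above can be sketched in a few lines of plain Python: each cell's regression output (center, size, score) is thresholded and converted to corner coordinates in one loop, with no separate proposal stage. The `decode_boxes` helper and the sample numbers are illustrative, not part of any YOLO implementation.

```python
def decode_boxes(preds, conf_thres=0.25):
    """Filter raw (cx, cy, w, h, score) predictions and convert the
    surviving boxes from center format to (x1, y1, x2, y2, score)."""
    boxes = []
    for cx, cy, w, h, score in preds:
        if score < conf_thres:
            continue  # drop low-confidence cells in the same pass
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, score))
    return boxes

preds = [
    (50.0, 40.0, 20.0, 10.0, 0.9),  # confident detection
    (10.0, 10.0, 4.0, 4.0, 0.1),    # below threshold, dropped
]
print(decode_boxes(preds))  # → [(40.0, 35.0, 60.0, 45.0, 0.9)]
```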
The family has evolved rapidly, from YOLOv1 all the way to YOLOv11, which was released less than a month ago.
II. The Nezha Development Board
Product overview
The Nezha developer kit is built around the credit-card-sized (85 x 56 mm) Nezha board. It uses an Intel® N97 processor (Alder Lake-N) with a maximum turbo frequency of 3.6 GHz and an Intel® UHD Graphics iGPU capable of high-resolution display output. The board carries LPDDR5 memory, eMMC storage, and TPM 2.0, provides a GPIO header, and supports both Windows and Linux. Combined with fanless cooling, these features make it an efficient platform for applications such as automation, IoT, digital signage, and robotics.
The board is a Raspberry Pi-class x86 host that can run Ubuntu Linux as well as full Windows. It carries an Intel N97 processor running at up to 3.6 GHz with an integrated GPU (iGPU), 64 GB of onboard eMMC storage, and LPDDR5-4800 memory (4 GB/8 GB), and offers USB 3.0, HDMI video output, a 3.5 mm audio jack, and a 1000 Mbps Ethernet port. It can be treated as a mini PC in its own right, and it can also be connected to microcontrollers such as Arduino or STM32 boards to extend it with more applications and all kinds of sensor modules.
In addition, its main interfaces are compatible with the Jetson Nano carrier board and its GPIO is Raspberry Pi-compatible, so it can reuse the Raspberry Pi and Jetson Nano ecosystems to the fullest. Whether the workload is automation, IoT, digital signage, camera-based object recognition, 3D printing, or real-time CNC interpolation control, it runs stably. It can serve as an edge computing engine for AI product validation and development, or as a domain-controller core for robotics development.
Unboxing photos are shown below. (The vendor even thoughtfully included a wireless network adapter so we could get the board online quickly; a big thumbs up to them.)
III. YOLOv8-OBB Model Deployment and Inference
1. Introduction to OpenVINO
OpenVINO™ is an open-source toolkit for optimizing deep learning models and deploying them in the cloud and at the edge. It accelerates deep learning inference across use cases such as generative AI, video, audio, and language, and supports models from popular frameworks including PyTorch, TensorFlow, and ONNX. It handles model conversion and optimization and deploys to Intel® hardware across a range of environments, whether on-premises, on-device, in the browser, or in the cloud.
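Before the C# sample below can call core.read_model, the YOLOv8-OBB weights need to be in a format OpenVINO can read. Assuming the Ultralytics CLI is installed, one common export route is the following (yolov8n-obb.pt here is the pretrained Ultralytics model, used purely as an example):

```shell
# Install Ultralytics (which bundles the export tooling) and OpenVINO
pip install ultralytics openvino

# Export the pretrained OBB weights to OpenVINO IR (.xml + .bin)
yolo export model=yolov8n-obb.pt format=openvino
```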
2. Running YOLOv8-OBB inference with OpenVINO
Reference: GitHub - guojin-yan/OpenVINO-CSharp-API-Samples
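Before stepping through the listing, note how Step 5 computes its scale factor: the image is padded into a square whose side is max(cols, rows), and factor = side / 1024 maps coordinates from the 1024x1024 model input back to original pixels. A plain-Python sketch of that arithmetic (letterbox_factor is an illustrative name, not part of the sample):

```python
def letterbox_factor(width, height, input_size=1024):
    """Pad the image into a square of side max(width, height); the
    returned factor maps model-space coordinates back to pixels."""
    side = max(width, height)
    factor = side / input_size
    return side, factor

side, factor = letterbox_factor(1920, 1080)
print(side, factor)  # → 1920 1.875
```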
static void yolov8_obb(string model_path, string image_path, string device)
{
DateTime start = DateTime.Now;
// -------- Step 1. Initialize OpenVINO Runtime Core --------
Core core = new Core();
DateTime end = DateTime.Now;
Slog.INFO("1. Initialize OpenVINO Runtime Core success, time spend: " + (end - start).TotalMilliseconds + "ms.");
// -------- Step 2. Read inference model --------
start = DateTime.Now;
Model model = core.read_model(model_path);
end = DateTime.Now;
Slog.INFO("2. Read inference model success, time spend: " + (end - start).TotalMilliseconds + "ms.");
OvExtensions.printf_model_info(model);
// -------- Step 3. Loading a model to the device --------
start = DateTime.Now;
CompiledModel compiled_model = core.compile_model(model, device);
end = DateTime.Now;
Slog.INFO("3. Loading a model to the device success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 4. Create an infer request --------
start = DateTime.Now;
InferRequest infer_request = compiled_model.create_infer_request();
end = DateTime.Now;
Slog.INFO("4. Create an infer request success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 5. Process input images --------
start = DateTime.Now;
Mat image = new Mat(image_path); // Read image by opencvsharp
int max_image_length = image.Cols > image.Rows ? image.Cols : image.Rows;
Mat max_image = Mat.Zeros(new OpenCvSharp.Size(max_image_length, max_image_length), MatType.CV_8UC3);
Rect roi = new Rect(0, 0, image.Cols, image.Rows);
image.CopyTo(new Mat(max_image, roi));
float factor = (float)(max_image_length / 1024.0);
end = DateTime.Now;
Slog.INFO("5. Process input images success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 6. Set up input data --------
start = DateTime.Now;
Tensor input_tensor = infer_request.get_input_tensor();
Shape input_shape = input_tensor.get_shape();
Mat input_mat = CvDnn.BlobFromImage(max_image, 1.0 / 255.0, new OpenCvSharp.Size(input_shape[2], input_shape[3]), 0, true, false);
float[] input_data = new float[input_shape[1] * input_shape[2] * input_shape[3]];
Marshal.Copy(input_mat.Ptr(0), input_data, 0, input_data.Length);
input_tensor.set_data<float>(input_data);
end = DateTime.Now;
Slog.INFO("6. Set up input data success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 7. Do inference synchronously --------
infer_request.infer(); // Warm-up run, excluded from the timing below
start = DateTime.Now;
infer_request.infer();
end = DateTime.Now;
Slog.INFO("7. Do inference synchronously success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 8. Get infer result data --------
start = DateTime.Now;
Tensor output_tensor = infer_request.get_output_tensor();
int output_length = (int)output_tensor.get_size();
float[] output_data = output_tensor.get_data<float>(output_length);
end = DateTime.Now;
Slog.INFO("8. Get infer result data success, time spend:" + (end - start).TotalMilliseconds + "ms.");
// -------- Step 9. Process result --------
start = DateTime.Now;
Mat result_data = new Mat(20, 21504, MatType.CV_32F, output_data);
result_data = result_data.T();
float[] d = new float[output_length];
result_data.GetArray(out d);
// Storage results list
List<Rect2d> position_boxes = new List<Rect2d>();
List<int> class_ids = new List<int>();
List<float> confidences = new List<float>();
List<float> rotations = new List<float>();
// Preprocessing output results
for (int i = 0; i < result_data.Rows; i++)
{
Mat classes_scores = new Mat(result_data, new Rect(4, i, 15, 1));
OpenCvSharp.Point max_classId_point, min_classId_point;
double max_score, min_score;
// Obtain the maximum value and its position in a set of data
Cv2.MinMaxLoc(classes_scores, out min_score, out max_score,
out min_classId_point, out max_classId_point);
// Confidence level between 0 ~ 1
// Obtain identification box information
if (max_score > 0.25)
{
float cx = result_data.At<float>(i, 0);
float cy = result_data.At<float>(i, 1);
float ow = result_data.At<float>(i, 2);
float oh = result_data.At<float>(i, 3);
double x = (cx - 0.5 * ow) * factor;
double y = (cy - 0.5 * oh) * factor;
double width = ow * factor;
double height = oh * factor;
Rect2d box = new Rect2d();
box.X = x;
box.Y = y;
box.Width = width;
box.Height = height;
position_boxes.Add(box);
class_ids.Add(max_classId_point.X);
confidences.Add((float)max_score);
rotations.Add(result_data.At<float>(i, 19));
}
}
// NMS non maximum suppression
int[] indexes = new int[position_boxes.Count];
CvDnn.NMSBoxes(position_boxes, confidences, 0.25f, 0.7f, out indexes);
List<RotatedRect> rotated_rects = new List<RotatedRect>();
for (int i = 0; i < indexes.Length; i++)
{
int index = indexes[i];
float w = (float)position_boxes[index].Width;
float h = (float)position_boxes[index].Height;
float x = (float)position_boxes[index].X + w / 2;
float y = (float)position_boxes[index].Y + h / 2;
float r = rotations[index];
float w_ = w > h ? w : h;
float h_ = w > h ? h : w;
r = (float)((w > h ? r : (float)(r + Math.PI / 2)) % Math.PI);
RotatedRect rotate = new RotatedRect(new Point2f(x, y), new Size2f(w_, h_), (float)(r * 180.0 / Math.PI));
rotated_rects.Add(rotate);
}
end = DateTime.Now;
Slog.INFO("9. Process result success, time spend:" + (end - start).TotalMilliseconds + "ms.");
for (int i = 0; i < indexes.Length; i++)
{
int index = indexes[i];
Point2f[] points = rotated_rects[i].Points();
for (int j = 0; j < 4; j++)
{
Cv2.Line(image, (Point)points[j], (Point)points[(j + 1) % 4], new Scalar(255, 100, 200), 2);
}
//Cv2.Rectangle(image, new OpenCvSharp.Point(position_boxes[index].TopLeft.X, position_boxes[index].TopLeft.Y + 30),
// new OpenCvSharp.Point(position_boxes[index].BottomRight.X, position_boxes[index].TopLeft.Y), new Scalar(0, 255, 255), -1);
Cv2.PutText(image, class_lables[class_ids[index]] + "-" + confidences[index].ToString("0.00"),
(Point)points[0], HersheyFonts.HersheySimplex, 0.8, new Scalar(0, 0, 0), 2);
}
string output_path = Path.Combine(Path.GetDirectoryName(Path.GetFullPath(image_path)),
Path.GetFileNameWithoutExtension(image_path) + "_result.jpg");
Cv2.ImWrite(output_path, image);
Slog.INFO("The result save to " + output_path);
Cv2.ImShow("Result", image);
Cv2.WaitKey(0);
}
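The trickiest part of Step 9 is the side/angle normalization before building each RotatedRect: the long side is forced first, and when w ≤ h the rotation is shifted by 90° before being folded into [0, π). The same logic in plain Python for clarity (normalize_obb is an illustrative helper, not part of the C# API):

```python
import math

def normalize_obb(w, h, r):
    """Mirror of the Step 9 logic: put the long side first and fold the
    rotation into [0, pi), shifting by 90 degrees when the sides swap."""
    long_side, short_side = (w, h) if w > h else (h, w)
    angle = (r if w > h else r + math.pi / 2) % math.pi
    return long_side, short_side, angle

# A box predicted taller than wide has its sides swapped and its angle shifted
print(normalize_obb(10.0, 20.0, 0.0))  # → (20.0, 10.0, 1.5707963267948966)
```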
Project inference result images: because the images from the industrial project are subject to customer confidentiality, images from another CAPTCHA-recognition project of mine are shown instead; apologies for the substitution.
In this project, the Nezha developer kit let us run inference on industrial vision images quickly.