IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to confirm it's a server issue rather than a problem with Blue Iris or another app using CodeProject.AI Server.
- If there's no obvious answer, copy the contents of the System Info tab into a message, describe what you are doing, what you see, and what you expected to see.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1: Go to the Install Modules tab on the dashboard and try re-installing the package. Make sure you have enough disk space and a reliable internet connection.
Option 2 (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup. This forces a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach. Then do a cd /app/modules/module-id, where module-id is the id of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud for less or more output.)
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
If you are a Blue Iris user and you are using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. It can be done by manually starting CodeProject.AI Server with command line parameters (not a great solution), by editing the module settings files (a little messy), or by setting system-wide environment variables (way easier). For version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a specific model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris what models CodeProject.AI Server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't yet use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models we have and copy them into that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets, which no longer exists) then we create that folder and copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
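For anyone curious, the empty-copies trick can be sketched in a few lines of shell. This is only an illustration, not the actual installer code, and it assumes the models are .pt files and that the source and destination folders are passed in:

```shell
# mirror_model_names SRC DST
# Create a zero-byte file in DST for every model file in SRC, so an app
# scanning DST (as Blue Iris does) sees the model names without the weights.
mirror_model_names() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    for model in "$src"/*.pt; do
        [ -e "$model" ] || continue          # no models found; nothing to do
        : > "$dst/$(basename "$model")"      # empty file: name only, no data
    done
}

# Example (paths illustrative):
# mirror_model_names "/c/Program Files/CodeProject/AI/modules/ObjectDetectionYolo/custom-models" \
#                    "/c/BlueIris/AI"
```

Because Blue Iris only reads the file names from that folder, zero-byte files are enough for it to build its list of custom calls.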
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models )
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
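For the first option, the registry value can be set from an elevated command prompt with reg.exe. This is just a sketch of the edit described above (back up the key first, and adjust the path if you installed CodeProject.AI Server somewhere else):

```shell
reg add "HKLM\SOFTWARE\Perspective Software\Blue Iris\Options\AI" ^
    /v deepstack_custompath /t REG_SZ ^
    /d "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models" /f
```

After setting this, restart Blue Iris so it re-reads the value, and remember to copy your models into that folder.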
cheers
Chris Maunder
|
System info:
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 630 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2111
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 630:
Driver Version 31.0.101.2111
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 4%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
I am assuming that I need to use the Default Objects setting in Blue Iris since there are no real models that reduce the number of objects that are found using the Coral module.
Blue Iris AI Settings:
Blue Iris Camera Settings:
Are there any recommended settings for the Coral Module that I can use to improve the accuracy of the results from the Coral Module?
Using what I assume are the default settings for the Coral module, I get this in Blue Iris.
The Coral module seems to return lots and lots of "persons". (Plus an airplane?) Is there any Coral model that is more accurate? I have changed the model size from small to medium, but this doesn't seem to have any effect.
I keep going back to the YOLOv5 .Net module, which is slower, but seems to be more accurate using the ipcam-* models.
Coral Module Info:
Module 'Object Detection (Coral)' 2.2.2 (ID: ObjectDetectionCoral)
Valid: True
Module Path: <root>\modules\ObjectDetectionCoral
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.9
Runtime Location: Local
FilePath: objectdetection_coral_adapter.py
Start pause: 1 sec
Parallelism: 16
LogVerbosity:
Platforms: all
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
CPAI_CORAL_MODEL_NAME = MobileNet SSD
CPAI_CORAL_MULTI_TPU = True
MODELS_DIR = <root>\modules\ObjectDetectionCoral\assets
MODEL_SIZE = medium
Status Data: {
"inferenceDevice": "Multi-TPU",
"inferenceLibrary": "TF-Lite",
"canUseGPU": "false",
"successfulInferences": 297,
"failedInferences": 7,
"numInferences": 304,
"averageInferenceMs": 13.447811447811448
}
Started: 12 Jun 2024 9:08:09 AM Central Standard Time
LastSeen: 12 Jun 2024 9:19:16 AM Central Standard Time
Status: Stopped
Requests: 304 (includes status calls)
Installation Log
2024-06-06 08:00:39: Installing CodeProject.AI Analysis Module
2024-06-06 08:00:39: ======================================================================
2024-06-06 08:00:39: CodeProject.AI Installer
2024-06-06 08:00:39: ======================================================================
2024-06-06 08:00:39: 285.7Gb of 487Gb available on
2024-06-06 08:00:39: General CodeProject.AI setup
2024-06-06 08:00:39: Creating Directories...Done
2024-06-06 08:00:39: GPU support
2024-06-06 08:00:40: CUDA Present...No
2024-06-06 08:00:40: ROCm Present...No
2024-06-06 08:00:42: Reading ObjectDetectionCoral settings.......Done
2024-06-06 08:00:42: Installing module Object Detection (Coral) 2.1.6
2024-06-06 08:00:42: Installing Python 3.9
2024-06-06 08:00:49: Downloading Python 3.9 interpreter...Expanding...Done.
2024-06-06 08:00:57: Creating Virtual Environment (Local)...Done
2024-06-06 08:00:58: Confirming we have Python 3.9 in our virtual environment...present
2024-06-06 08:01:01: Downloading edge TPU runtime...Expanding...Done.
2024-06-06 08:01:01: Copying contents of edgetpu_runtime-20221024.zip to edgetpu_runtime...done
2024-06-06 08:01:01: Installing the edge TPU libraries...
2024-06-06 08:01:01: Installing UsbDk
2024-06-06 08:01:02: Installing Windows drivers
2024-06-06 08:01:02: Microsoft PnP Utility
2024-06-06 08:01:02: Adding driver package: coral.inf
2024-06-06 08:01:02: Driver package added successfully. (Already exists in the system)
2024-06-06 08:01:02: Published Name: oem40.inf
2024-06-06 08:01:02: Driver package is up-to-date on device: PCI\VEN_1AC1&DEV_089A&SUBSYS_089A1AC1&REV_00\4&6ba732e&0&00D8
2024-06-06 08:01:02: Adding driver package: Coral_USB_Accelerator.inf
2024-06-06 08:01:02: Driver package added successfully. (Already exists in the system)
2024-06-06 08:01:02: Published Name: oem43.inf
2024-06-06 08:01:02: Driver package installed on device: USB\VID_18D1&PID_9302\5&220fb38b&0&10
2024-06-06 08:01:02: Adding driver package: Coral_USB_Accelerator_(DFU).inf
2024-06-06 08:01:02: Driver package added successfully. (Already exists in the system)
2024-06-06 08:01:02: Published Name: oem55.inf
2024-06-06 08:01:02: Total driver packages: 3
2024-06-06 08:01:02: Added driver packages: 2
2024-06-06 08:01:02: Installing performance counters
2024-06-06 08:01:02: Info: Provider {aaa5bf9e-c44b-4177-af65-d3a06ba45fe7} defined in C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\edgetpu_runtime\third_party\coral_accelerator_windows\coral.man is already installed in system repository.
2024-06-06 08:01:02: Info: Successfully installed performance counters in C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\edgetpu_runtime\third_party\coral_accelerator_windows\coral.manCopying edgetpu and libusb to System32
2024-06-06 08:01:02: 1 file(s) copied.
2024-06-06 08:01:02: 1 file(s) copied.
2024-06-06 08:01:02: Install complete
2024-06-06 08:01:10: Downloading EfficientDet (large) models...Expanding...Done.
2024-06-06 08:01:10: Copying contents of objectdetection-efficientdet-large-edgetpu.zip to assets...done
2024-06-06 08:01:14: Downloading EfficientDet (medium) models...Expanding...Done.
2024-06-06 08:01:14: Copying contents of objectdetection-efficientdet-medium-edgetpu.zip to assets...done
2024-06-06 08:01:18: Downloading EfficientDet (small) models...Expanding...Done.
2024-06-06 08:01:19: Copying contents of objectdetection-efficientdet-small-edgetpu.zip to assets...done
2024-06-06 08:01:21: Downloading EfficientDet (tiny) models...Expanding...Done.
2024-06-06 08:01:21: Copying contents of objectdetection-efficientdet-tiny-edgetpu.zip to assets...done
2024-06-06 08:01:34: Downloading MobileNet (large) models...Expanding...Done.
2024-06-06 08:01:34: Copying contents of objectdetection-mobilenet-large-edgetpu.zip to assets...done
2024-06-06 08:01:37: Downloading MobileNet (medium) models...Expanding...Done.
2024-06-06 08:01:37: Copying contents of objectdetection-mobilenet-medium-edgetpu.zip to assets...done
2024-06-06 08:01:40: Downloading MobileNet (small) models...Expanding...Done.
2024-06-06 08:01:40: Copying contents of objectdetection-mobilenet-small-edgetpu.zip to assets...done
2024-06-06 08:01:43: Downloading MobileNet (tiny) models...Expanding...Done.
2024-06-06 08:01:43: Copying contents of objectdetection-mobilenet-tiny-edgetpu.zip to assets...done
2024-06-06 08:01:58: Downloading YOLOv8 (large) models...Expanding...Done.
2024-06-06 08:01:58: Copying contents of objectdetection-yolov8-large-edgetpu.zip to assets...done
2024-06-06 08:02:10: Downloading YOLOv8 (medium) models...Expanding...Done.
2024-06-06 08:02:13: Copying contents of objectdetection-yolov8-medium-edgetpu.zip to assets...done
2024-06-06 08:02:21: Downloading YOLOv8 (small) models...Expanding...Done.
2024-06-06 08:02:21: Copying contents of objectdetection-yolov8-small-edgetpu.zip to assets...done
2024-06-06 08:02:28: Downloading YOLOv8 (tiny) models...Expanding...Done.
2024-06-06 08:02:29: Copying contents of objectdetection-yolov8-tiny-edgetpu.zip to assets...done
2024-06-06 08:02:29: Installing Python packages for Object Detection (Coral)
2024-06-06 08:02:29: Installing GPU-enabled libraries: If available
2024-06-06 08:02:34: Ensuring Python package manager (pip) is installed...Done
2024-06-06 08:02:44: Ensuring Python package manager (pip) is up to date...Done
2024-06-06 08:02:44: Python packages specified by requirements.windows.txt
2024-06-06 08:02:48: - Installing Pillow, a Python Image Library...(checked) Done
2024-06-06 08:02:54: - Installing Tensorflow Lite...(checked) Done
2024-06-06 08:03:02: - Installing PyCoral...(checked) Done
2024-06-06 08:03:21: - Installing NumPy, a package for scientific computing...Already installed
2024-06-06 08:03:21: Installing Python packages for the CodeProject.AI Server SDK
2024-06-06 08:03:23: Ensuring Python package manager (pip) is installed...Done
2024-06-06 08:03:25: Ensuring Python package manager (pip) is up to date...Done
2024-06-06 08:03:25: Python packages specified by requirements.txt
2024-06-06 08:03:26: - Installing Pillow, a Python Image Library...Already installed
2024-06-06 08:03:29: - Installing Charset normalizer...(checked) Done
2024-06-06 08:03:33: - Installing aiohttp, the Async IO HTTP library...(checked) Done
2024-06-06 08:03:35: - Installing aiofiles, the Async IO Files library...(checked) Done
2024-06-06 08:03:38: - Installing py-cpuinfo to allow us to query CPU info...(checked) Done
2024-06-06 08:03:41: - Installing Requests, the HTTP library...(checked) Done
2024-06-06 08:03:48: Self test: Self-test passed
2024-06-06 08:03:48: Module setup time 00:03:07.86
2024-06-06 08:03:48: Setup complete
2024-06-06 08:03:48: Total setup time 00:03:09.13
Installer exited with code 0
2024-06-10 17:22:48: Installing CodeProject.AI Analysis Module
2024-06-10 17:22:49: ======================================================================
2024-06-10 17:22:49: CodeProject.AI Installer
2024-06-10 17:22:49: ======================================================================
2024-06-10 17:22:50: 289.9Gb of 487Gb available on
2024-06-10 17:22:50: General CodeProject.AI setup
2024-06-10 17:22:50: Creating Directories...done
2024-06-10 17:22:50: GPU support
2024-06-10 17:22:51: CUDA Present...No
2024-06-10 17:22:51: ROCm Present...No
2024-06-10 17:22:51: Checking for .NET 7.0...Checking SDKs...Upgrading: .NET is 0
2024-06-10 17:22:51: Current version is 0. Installing newer version.
2024-06-10 17:22:51: 'winget' is not recognized as an internal or external command,
2024-06-10 17:22:51: operable program or batch file.
2024-06-10 17:22:54: Reading ObjectDetectionCoral settings.......done
2024-06-10 17:22:54: Installing module Object Detection (Coral) 2.2.2
2024-06-10 17:22:55: Installing Python 3.9
2024-06-10 17:22:55: Python 3.9 is already installed
2024-06-10 17:22:56: Creating Virtual Environment (Local)...Virtual Environment already present
2024-06-10 17:22:56: Confirming we have Python 3.9 in our virtual environment...present
2024-06-10 17:22:56: Installing the edge TPU libraries...
2024-06-10 17:22:56: Installing UsbDk
2024-06-10 17:22:57: Installing Windows drivers
2024-06-10 17:22:57: Microsoft PnP Utility
2024-06-10 17:22:57: Adding driver package: coral.inf
2024-06-10 17:22:57: Driver package added successfully. (Already exists in the system)
2024-06-10 17:22:57: Published Name: oem40.inf
2024-06-10 17:22:57: Driver package is up-to-date on device: PCI\VEN_1AC1&DEV_089A&SUBSYS_089A1AC1&REV_00\4&6ba732e&0&00D8
2024-06-10 17:22:57: Adding driver package: Coral_USB_Accelerator.inf
2024-06-10 17:22:57: Driver package added successfully. (Already exists in the system)
2024-06-10 17:22:57: Published Name: oem43.inf
2024-06-10 17:22:57: Driver package installed on device: USB\VID_18D1&PID_9302\5&220fb38b&0&10
2024-06-10 17:22:57: Adding driver package: Coral_USB_Accelerator_(DFU).inf
2024-06-10 17:22:57: Driver package added successfully. (Already exists in the system)
2024-06-10 17:22:57: Published Name: oem55.inf
2024-06-10 17:22:57: Total driver packages: 3
2024-06-10 17:22:57: Added driver packages: 2
2024-06-10 17:22:57: Installing performance counters
2024-06-10 17:22:57: Info: Provider {aaa5bf9e-c44b-4177-af65-d3a06ba45fe7} defined in C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\edgetpu_runtime\third_party\coral_accelerator_windows\coral.man is already installed in system repository.
2024-06-10 17:22:57: Info: Successfully installed performance counters in C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\edgetpu_runtime\third_party\coral_accelerator_windows\coral.manCopying edgetpu and libusb to System32
2024-06-10 17:22:57: 1 file(s) copied.
2024-06-10 17:22:57: 1 file(s) copied.
2024-06-10 17:22:57: Install complete
2024-06-10 17:23:07: Downloading EfficientDet (large) models...Expanding...done.
2024-06-10 17:23:07: Copying contents of objectdetection-efficientdet-large-edgetpu.zip to assets...done
2024-06-10 17:23:14: Downloading EfficientDet (medium) models...Expanding...done.
2024-06-10 17:23:14: Copying contents of objectdetection-efficientdet-medium-edgetpu.zip to assets...done
2024-06-10 17:23:21: Downloading EfficientDet (small) models...Expanding...done.
2024-06-10 17:23:21: Copying contents of objectdetection-efficientdet-small-edgetpu.zip to assets...done
2024-06-10 17:23:26: Downloading EfficientDet (tiny) models...Expanding...done.
2024-06-10 17:23:26: Copying contents of objectdetection-efficientdet-tiny-edgetpu.zip to assets...done
2024-06-10 17:23:44: Downloading MobileNet (large) models...Expanding...done.
2024-06-10 17:23:44: Copying contents of objectdetection-mobilenet-large-edgetpu.zip to assets...done
2024-06-10 17:23:47: Downloading MobileNet (medium) models...Expanding...done.
2024-06-10 17:23:47: Copying contents of objectdetection-mobilenet-medium-edgetpu.zip to assets...done
2024-06-10 17:23:51: Downloading MobileNet (small) models...Expanding...done.
2024-06-10 17:23:51: Copying contents of objectdetection-mobilenet-small-edgetpu.zip to assets...done
2024-06-10 17:23:55: Downloading MobileNet (tiny) models...Expanding...done.
2024-06-10 17:23:55: Copying contents of objectdetection-mobilenet-tiny-edgetpu.zip to assets...done
2024-06-10 17:24:15: Downloading YOLOv8 (large) models...Expanding...done.
2024-06-10 17:24:15: Copying contents of objectdetection-yolov8-large-edgetpu.zip to assets...done
2024-06-10 17:24:30: Downloading YOLOv8 (medium) models...Expanding...done.
2024-06-10 17:24:30: Copying contents of objectdetection-yolov8-medium-edgetpu.zip to assets...done
2024-06-10 17:24:34: Downloading YOLOv8 (small) models...Expanding...done.
2024-06-10 17:24:34: Copying contents of objectdetection-yolov8-small-edgetpu.zip to assets...done
2024-06-10 17:24:35: Downloading YOLOv8 (tiny) models...Expanding...done.
2024-06-10 17:24:35: Copying contents of objectdetection-yolov8-tiny-edgetpu.zip to assets...done
2024-06-10 17:24:35: Installing Python packages for Object Detection (Coral)
2024-06-10 17:24:35: Installing GPU-enabled libraries: If available
2024-06-10 17:24:37: Ensuring Python package manager (pip) is installed...done
2024-06-10 17:24:40: Ensuring Python package manager (pip) is up to date...done
2024-06-10 17:24:40: Python packages specified by requirements.windows.txt
2024-06-10 17:24:41: - Installing Pillow, a Python Image Library...Already installed
2024-06-10 17:24:42: - Installing Tensorflow Lite...Already installed
2024-06-10 17:24:43: - Installing PyCoral...Already installed
2024-06-10 17:25:05: - Installing NumPy, a package for scientific computing...Already installed
2024-06-10 17:25:05: Installing Python packages for the CodeProject.AI Server SDK
2024-06-10 17:25:06: Ensuring Python package manager (pip) is installed...done
2024-06-10 17:25:09: Ensuring Python package manager (pip) is up to date...done
2024-06-10 17:25:09: Python packages specified by requirements.txt
2024-06-10 17:25:10: - Installing Pillow, a Python Image Library...Already installed
2024-06-10 17:25:11: - Installing Charset normalizer...Already installed
2024-06-10 17:25:12: - Installing aiohttp, the Async IO HTTP library...Already installed
2024-06-10 17:25:13: - Installing aiofiles, the Async IO Files library...Already installed
2024-06-10 17:25:14: - Installing py-cpuinfo to allow us to query CPU info...Already installed
2024-06-10 17:25:15: - Installing Requests, the HTTP library...Already installed
2024-06-10 17:25:16: Scanning modulesettings for downloadable models...Processing model list
2024-06-10 17:25:17: Downloading MobileNet Large...already exists...Expanding...done.
2024-06-10 17:25:17: Copying contents of objectdetection-mobilenet-large-edgetpu.zip to assets...done
2024-06-10 17:25:19: Downloading MobileNet Medium...already exists...Expanding...done.
2024-06-10 17:25:19: Copying contents of objectdetection-mobilenet-medium-edgetpu.zip to assets...done
2024-06-10 17:25:20: Downloading MobileNet Small...already exists...Expanding...done.
2024-06-10 17:25:20: Copying contents of objectdetection-mobilenet-small-edgetpu.zip to assets...done
2024-06-10 17:25:22: Downloading MobileNet Tiny...already exists...Expanding...done.
2024-06-10 17:25:22: Copying contents of objectdetection-mobilenet-tiny-edgetpu.zip to assets...done
2024-06-10 17:25:36: Self test: Self-test passed
2024-06-10 17:25:36: Module setup time 00:02:44.52
2024-06-10 17:25:36: Setup complete
2024-06-10 17:25:36: Total setup time 00:02:46.62
Installer exited with code 0
|
Someone else will need to get you the correct settings, but I suspect that yours are wrong because ‘IPcam-combined’ isn’t currently supported for Coral. Try using the default model first instead of a custom one. Also, your confidence threshold is low.
Edit: also, are you able to switch to YOLOv8 from MobileNet SSD?
|
Maybe I didn't explain completely.
I go back to YOLOv5 .NET because I get more accurate results with the YOLOv5.NET WITH the ipcam-* models.
I know those models are not available with the Coral modules.
That's why I use the Default Objects settings.
|
Hi,
Following upgrade to CPAI 2.6.5 on Ubuntu I have seen the Coral module crash several times, with the UI showing "Lost Contact" with the module and the following log messages:
09:02:11:Response rec'd from Object Detection (Coral) command 'detect' (...ee25ac) [''] took 21ms
09:03:17:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:03:18:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:13:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:15:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:17:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:17:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
and again a few hours later:
11:48:52:Response rec'd from Object Detection (Coral) command 'detect' (...f1837e) [''] took 19ms
11:50:22:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
11:51:22:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
I have a second instance that is still on 2.6.2 which has not exhibited this behaviour, so I assume this is an issue with the new code.
modified yesterday.
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
sure..
Server version: 2.6.5
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1268L v3 @ 2.30GHz (Intel)
4 CPUs x 1 core. 1 logical processors (x64)
System RAM: 2 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: 7.0.119
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
VMware SVGA II Adapter:
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
and from the other system that's not exhibiting the same behaviour:
Server version: 2.6.2
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1268L v3 @ 2.30GHz (Intel)
4 CPUs x 1 core. 1 logical processors (x64)
System RAM: 2 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: 7.0.119
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
VMware SVGA II Adapter:
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
Cool project. Thank you.
I've deployed it in an LXC under Proxmox, and another LXC with the full development project. FWIW I also have it running on an RPi 5 with an NVMe drive and a Coral TPU, deployed to Docker with the Dockge interface.
I've successfully run the "Optical Character Recognition" against images of road signs and of a technical manual in .pdf form. Works very well.
For grins I ran the OCR against an image of my terrible cursive handwriting. Of three paragraphs, it got two thirds of the date correct, as I had written it "month d, yyyy". The remainder of the sample was gibberish.
In my investigations I've come across, among others:
* Transkribus
* Pen2text.com
Pen2text blew me away with how easy it was to run a simple test on that same sample of my horribly illegible handwriting. It missed two words that frankly looked like a leaky pen.
At any rate for either of these projects I feel I'd have to hire legal representation to understand the ownership and use of the OCR'ed sources and results.
I'd sure appreciate any tips on open source, self hosted, trainable OCR software suitable for a collection of perhaps fifty cursive multipage letters written in the same hand six or more decades ago. Once processed the text would be fed to a model to allow for chat of that subject matter.
Bonus points for pointers to open source archival platforms to organize the letters, with an API so that I could correlate the OCRed text to the collection of images. Why reinvent that archiving wheel, so to speak.
Thanks for listening, by way of your reading.
Jeff
KF7CRU @jhalbrecht
|
Since upgrading to 2.6.5 I get "AI not responding" and no detections. I have to enable the service to start with Blue Iris after every reboot. Also, changing between enabling and disabling the GPU doesn't seem to make any difference. I am using an Intel CPU with an integrated GPU and used to be able to select Enable GPU. Also, the CodeProject.AI status does not indicate DirectML even after several detections. I am using YOLO .NET. Please advise? Everything I mentioned seemed to work fine with 2.6.2.
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
One thing, I recommend going to Blue Iris main AI settings and unchecking auto stop/start.
Thanks,
Sean Ewington
CodeProject
|
Here it is
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 530 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2111
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.20
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 530:
Driver Version 31.0.101.2111
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
I unchecked Start/Stop in Blue Iris and restarted the Blue Iris PC. I have to manually start CodeProject.AI from its dashboard, but at least I now see DirectML after I do this. The only problem is that custom models are not displaying in the main AI settings. If I click the three dots, I get "Refresh AI to display models". It looks like the custom models are not loading for some reason. How would I refresh AI? Please advise?
|
Hi,
Since the install of the latest version of CP on my Blue Iris server, I get the message "Alert Cancelled AI not responding".
Consequently I do not get any notification on my phone when someone triggers a camera because there is no analysis done.
I reinstalled CP, no success.
Any suggestion on what to do next?
I am on Windows 10
Thanks,
Michel.
|
It works now
I uninstalled CP, then used the Everything search tool from Voidtools to find and delete every remaining file with CodeProject in its name, then reinstalled CP and restarted the server, and it works as it should.
Thanks
|
The issue exists only with newer CPAI builds and occurs several times a day on different hardware (Intel with Tesla P4 vs. Ryzen with RTX 3090):
19:00:49:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.17763)
CPUs: Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz (Intel)
1 CPU x 11 cores. 22 logical processors (x64)
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 538.67, CUDA: 12.2.140 (up to: 12.2), Compute: 6.1, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.10
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Microsoft Hyper-V Video:
Driver Version 10.0.17763.2145
Video Processor
NVIDIA Tesla P4:
Driver Version 31.0.15.3867
Video Processor Tesla P4
System GPU info:
GPU 3D Usage 8%
GPU RAM Usage 6.4 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
===================================================================
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: AMD Ryzen 9 7950X 16-Core Processor (AMD)
1 CPU x 16 cores. 32 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3090 (24 GiB) (NVIDIA)
Driver: 555.85, CUDA: 12.5.40 (up to: 12.5), Compute: 8.6, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 8.0.1
.NET SDK: 8.0.101
Default Python: 3.10.6
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 3090:
Driver Version 32.0.15.5585
Video Processor NVIDIA GeForce RTX 3090
System GPU info:
GPU 3D Usage 9%
GPU RAM Usage 2.1 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Are you able to replicate this issue with any specific image? My guess is there's something with the image itself that's unexpected for the YOLO processor.
Another option is to switch to the .NET YOLO module, or the YOLOv8 module and see if that helps.
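If it does turn out to be size related, one workaround worth trying is padding the image so both dimensions are multiples of the model stride (32 for YOLOv5) before sending it to the detector. A rough sketch in plain NumPy (the helper name is mine, it's not part of CodeProject.AI):

```python
import numpy as np

def pad_to_stride(img: np.ndarray, stride: int = 32) -> np.ndarray:
    """Zero-pad an HWC image so height and width are multiples of `stride`."""
    h, w = img.shape[:2]
    new_h = ((h + stride - 1) // stride) * stride  # round up to stride multiple
    new_w = ((w + stride - 1) // stride) * stride
    padded = np.zeros((new_h, new_w) + img.shape[2:], dtype=img.dtype)
    padded[:h, :w] = img  # original pixels in the top-left corner
    return padded

# e.g. a 385x608 frame: height is not a multiple of 32, width already is
img = np.zeros((385, 608, 3), dtype=np.uint8)
print(pad_to_stride(img).shape)  # (416, 608, 3)
```

That said, YOLOv5's own letterboxing normally handles this, so if padding fixes it the interesting question is why the built-in resize path was skipped for that particular image.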
cheers
Chris Maunder
|
|
|
|
|
Wrong number of channels in the image? (Greyscale?)
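If that's the cause, normalizing the channel count before detection is a one-liner per case. A quick sketch in plain NumPy (hypothetical helper, not CodeProject.AI code):

```python
import numpy as np

def ensure_three_channels(img: np.ndarray) -> np.ndarray:
    """Coerce an image array to HWC with exactly 3 channels."""
    if img.ndim == 2:                 # greyscale with no channel axis
        return np.stack([img] * 3, axis=-1)
    if img.shape[-1] == 1:            # greyscale with a singleton channel axis
        return np.repeat(img, 3, axis=-1)
    if img.shape[-1] == 4:            # RGBA: drop the alpha channel
        return img[..., :3]
    return img                        # already 3-channel
```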
|
|
|
|
|
Hello,
Since Raspberry Pi just announced their new "Raspberry Pi AI Kit" ( https://www.raspberrypi.com/products/ai-kit/[^] ), what would the possibility be of getting a Hailo AI module natively added to CP.AI? I'm currently using a dual edge Coral TPU on my Pi 5, but it has its limitations.
|
|
|
|
|
Yeah I saw that - $70 for some serious power is pretty awesome.
The Hailo stack seems straightforward (though their site leaves something to be desired). Without access to the hardware we can't do anything here, but I'm sure it would be a very straightforward exercise for someone to adapt any of the existing object detection modules to use the Hailo models and TensorRT. The segmentation example, for instance, seems super simple.
cheers
Chris Maunder
|
|
|
|
|
I'm actually getting one in the next few days and can help in any way possible to get it up and running. Just let me know.
|
|
|
|
|
This looks very interesting; like the Coral TPU, but more modern. Google's lack of Coral support over the past few years has me concerned about the future of the platform. It won't take too many more years for Coral to no longer be competitive. I see it runs YOLOv5m at 640x640 at 218 FPS on their benchmarks page. (I just benchmarked a YOLOv8m 640x640 model running at 2.8 FPS on Coral. If you reduce the size to 352x608 it runs at 5.3 FPS, which is about as fast as you can get it to go on the Coral.)
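For context, converting those benchmark numbers to per-frame latency makes the gap easier to reason about (simple arithmetic, nothing Hailo- or Coral-specific):

```python
def ms_per_frame(fps: float) -> float:
    """Convert a frames-per-second figure to milliseconds per frame."""
    return 1000.0 / fps

print(round(ms_per_frame(218), 1))  # Hailo-8, YOLOv5m 640x640: ~4.6 ms/frame
print(round(ms_per_frame(2.8), 1))  # Coral, YOLOv8m 640x640: ~357.1 ms/frame
```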
I'm definitely interested in how well it works and how well my Coral TPU learnings/code port to it. What model did you order & where did you find it for $70? This is the only M.2 I see actually immediately available (and I don't see _any_ PCIe cards available):
https://eshop.aaeon.com/ai-edge-computing-hailo-8-m2-2280-module.html[^]
I see that they sell a $170 'starter kit' which looks effectively the same as the above card, but I'd need to fill in my work details, which I'm less comfortable with.
Order Hailo-8 Starter Kit | Hailo AI Processing Technology[^]
|
|
|
|
|
I ordered the kit from PiShop at Raspberry Pi AI Kit - PiShop.us[^] . It was listed as a preorder but shipped last week and will arrive tomorrow. I'm not sure why it's listed at $85 when other sites like CanaKit have it for the actual $70 (Raspberry Pi AI Kit for Pi 5[^] ). I hope that it will eventually be sold as a standalone chip instead of having to get the HAT with it as well, but my guess is that Raspberry Pi purchased them en masse at a discount and is reselling them at the lower price.
|
|
|
|
|