|
IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in
C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to ensure it's a server issue rather than something else such as Blue Iris or another app using CodeProject.AI server.
- If there's no obvious answer, then copy and paste the contents of the System Info tab into a message, describe what you are doing, what you see, and what you would expect.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1: Go to the Install Modules tab on the dashboard and try re-installing the package. Make sure you have enough disk space and a reliable internet connection.
Option 2 (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup. This will force a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the Docker container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach. Then do a cd /app/modules/module-id, where module-id is the id of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud to get less or more info.)
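The Docker steps above can be collapsed into a one-shot command. This is just a sketch: "codeproject-ai" (the container name) and "module-id" are placeholders; use docker ps to find your container's actual name and substitute the module's real folder id.

```shell
# One-shot version of the reinstall steps above.
# "codeproject-ai" and "module-id" are placeholders -- substitute your
# container's name (see `docker ps`) and the module's real id.
docker exec -it codeproject-ai bash -c \
  "cd /app/modules/module-id && sudo bash ../../setup.sh --verbosity info"
```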
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
|
|
|
|
If you are a Blue Iris user using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. It can be done by manually starting CodeProject.AI Server with command line parameters (not a great solution), editing the module settings files (a little messy), or setting system-wide environment variables (way easier). For version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a specific model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris about what models CodeProject.AI server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models that we have, and copy them into that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets, which no longer exists) then we create that folder, and then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
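A sketch of what the hack amounts to (this is not CodeProject.AI's actual installer code, and both paths in the example call are assumptions): mirror each model's name into Blue Iris's custom-model folder as an empty stand-in file, so Blue Iris can list the names without holding the real weights.

```shell
# Mirror each model's *name* into Blue Iris's folder as a zero-byte file.
# $1 = CodeProject.AI custom-models dir, $2 = Blue Iris custom-models dir
mirror_model_names() {
    mkdir -p "$2"
    for model in "$1"/*.pt; do
        [ -e "$model" ] || continue          # glob matched nothing; skip
        touch "$2/$(basename "$model")"      # empty stand-in, same name
    done
}

# Example call -- substitute the folder the registry says Blue Iris uses:
# mirror_model_names "/path/to/CPAI/custom-models" "/path/to/BlueIris/custom-models"
```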
If you have your own models in the Blue Iris folder
You will need to copy them to CodeProject.AI Server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models).
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
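The registry change in the first option can also be scripted from an elevated Command Prompt. A hedged sketch: the key and value names come from the post above; the data path is the default install location mentioned there, so adjust it if your install lives elsewhere.

```shell
REM Sketch of the first option above; run in an elevated Command Prompt.
REM The /d path is the default from this post -- adjust to where your
REM custom models actually live.
reg add "HKLM\SOFTWARE\Perspective Software\Blue Iris\Options\AI" ^
  /v deepstack_custompath /t REG_SZ ^
  /d "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models" /f
```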
cheers
Chris Maunder
|
|
|
|
|
Can someone help troubleshoot this error? The short of it: when I try to use CodeProject.AI with my BI system's GPU (an Intel iGPU), this error comes up in the logs:
ObjectDetectionYOLOv5Net.exe: 2024-05-02 18:31:29.4590658 [E:onnxruntime:, inference_session.cc:1799 onnxruntime::InferenceSession::Initialize::::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\DmlGraphFusionHelper.cpp(432)\onnxruntime.DLL!00007FFE2712F11B: (caller: 00007FFE270B44E6) Exception(3) tid(2154) 80004005 Unspecified error
If I downgrade back to older versions (I believe it was older than CodeProject AI 2.3.4) I am able to use the GPU. Blue Iris support basically told me to come here for assistance. Any help with this is appreciated!
|
|
|
|
|
Hi,
I recently set up CodeProject.AI to work with Blue Iris. Blue Iris is running on Windows 11 in a Proxmox VM.
I have successfully passed through a Coral USB to the VM and CPAI. Everything seems to work fine for about 10 minutes, then CPAI seems to revert back to CPU. The Coral is still present in Device Manager. I've turned off any USB power management in Windows, to no avail.
Any Suggestions would be greatly appreciated.
|
|
|
|
|
Forgive me if I missed this being posted somewhere already.
I want to set up a central AI server in our data center, and I want our developers to be able to point their projects at that central server for testing. Testing against the machine's IP address on port 32168 yields no connection.
http://machine ip:32168/
Seems simple but I'm missing something.
A similar question was posted with no answer. "how to connect a Blue Iris machine to another machine running CodeProject AI?"
This is not a Blue Iris question, just a reference above.
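A quick way to rule out basic reachability before digging into CodeProject.AI itself (the IP below is a placeholder for the central server's address):

```shell
# "192.168.1.50" is a placeholder -- substitute your server's address.
# 200 means the dashboard answered; 000 means no connection at all
# (routing or firewall), which matches the symptom described above.
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://192.168.1.50:32168/
```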
modified 6hrs ago.
|
|
|
|
|
Well, assuming successful installation, that should work.
In my case, 192.168.50.17:32168 connects to a Linux (Debian) system running AI server in a Docker container.
If you are doing an install to Windows, did you use the script?
Did you get any errors during install? Any errors in system logs?
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
|
|
|
|
Have you opened port 32168 for HTTP (in and out) on your server's firewall (and possibly also on your developer's firewalls?)
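On Windows, opening the port might look like this (a sketch, run in an elevated Command Prompt; the rule name is arbitrary and you should scope the rule to your network's policy):

```shell
REM Example inbound firewall rule for the CodeProject.AI Server port.
REM Rule name is arbitrary; adjust profiles/scope as your policy requires.
netsh advfirewall firewall add rule name="CodeProject.AI Server" ^
  dir=in action=allow protocol=TCP localport=32168
```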
cheers
Chris Maunder
|
|
|
|
|
Thank you, I did not take the time to trace this out.
I found a second layer that was blocking the port.
Juggling too many things at once.
Thank you all for the support.
|
|
|
|
|
Is it better to run CodeProject on the same Windows PC Blue Iris is running, or run it on a virtual machine running Docker?
|
|
|
|
|
Probably depends on which machine has the better GPU, if you are using one. We run CPAI on a VM (Debian with Docker) because the system hosting the virtual machine has the better video card (Linux is a little leaner). You do have to have a virtual host that allows PCI pass-through to use the video card; we run on ESXi. In the earlier days, it seemed to be easier when doing CPAI updates. If our BI system had a better video card, I would run "native".
Just my $0.02.
|
|
|
|
|
The PC I am using has an older GPU (GTX 960), and I was told I cannot use CUDA since my GPU does not support it. So I've only been using YOLOv5 .NET.
I have an AMD Ryzen 5700G 8-core with 16GB RAM at 3600MHz. Would I benefit from running CP on a virtual machine?
|
|
|
|
|
Again, if the VM is on another PC that has faster performance, it could be of benefit. Keep in mind there is overhead in the networking.
It is pretty easy to set up the VM and do a test. It is only a small configuration change in the BI system once you work out the Docker set up. Then look at the ms alert times.
|
|
|
|
|
What would be an acceptable speed for alerts?
Just want to know what is considered normal or too slow.
thanks.
|
|
|
|
|
I consider mine mediocre at best, 60-80 msec. But I run an old low memory video card P620, only 2GB.
That seems to do the job, mostly we filter for false alerts due to shadows.
We plan to upgrade it although we only use AI on 5 of 14 cameras.
|
|
|
|
|
Maybe I'm looking at the Status logs wrong, but mine only shows ms when it doesn't find anything.
When it does detect and trigger, there is no ms at all. Is this normal?
For example,
as you can see, it detected a person at 88%, but no ms... then below, it found nothing and the alert was cancelled, but it shows 238ms.
|
|
|
|
|
On the Alert page, select "save AI analysis details."
Then open the log file. Make sure you select "save to file".
|
|
|
|
|
This is a Blue Iris issue, send an email to Blue Iris Support describing the issue.
|
|
|
|
|
Hello everyone,
I recently got an M.2 Coral device and successfully ran it using the default settings in CP.AI. I wanted to try some other models, so I attempted to use YOLOv8.
I clicked on 'download model' and very quickly got this:
Preparing to download model 'objectdetection-yolov8-medium-edgetpu.zip' for module ObjectDetectionCoral
Downloading module 'objectdetection-yolov8-medium-edgetpu.zip' to 'C:\Program Files\CodeProject\AI\downloads\modules\ObjectDetectionCoral\objectdetection-yolov8-medium-edgetpu.zip'
(using cached download for 'objectdetection-yolov8-medium-edgetpu.zip')
objectdetection-yolov8-medium-edgetpu.zip has been downloaded and installed.
Since the file was already downloaded into that directory beforehand by a fresh install, when I have it "attempt to download" it just erases it. It seems that no install is actually done, because when I attempt to use the model, I get this error:
objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m__segment_0_of_2_edgetpu.tflite doesn't exist
objectdetection_coral_adapter.py: WARNING:root:Model file not found: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m__segment_0_of_2_edgetpu.tflite'
objectdetection_coral_adapter.py: WARNING:root:No Coral TPUs found or able to be initialized. Using CPU.
objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m-416_640px.tflite doesn't exist
objectdetection_coral_adapter.py: WARNING:root:Unable to create interpreter for CPU using edgeTPU library: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m-416_640px.tflite'
objectdetection_coral_adapter.py: TPU detected
Then CP.AI defaults to CPU and the TPU isn't used.
YOLOv8 is also not displayed in green in the Download Models drop-down menu (which I assume means it's not installed), just MobileNet Large, Medium, Small and Tiny.
So it seems that CP.AI isn't actually installing the model, and there is in fact no yolov8m__segment_0_of_2_edgetpu.tflite file in any directory or .zip.
|
|
|
|
|
|
My setup is with BI5.
A Windows 11 VM running BI5, Zabbix, and the QEMU guest agent.
After install, the web interface gives me an error at the right:
Unable to check for updates
I've tried reinstalling .NET, uninstalling CPAI, and deleting the folders in Program Files and ProgramData. I checked the runtime folder and there's nothing inside either. I've also tried disabling some features in Windows Firewall and Windows Defender.
Not sure what I can do to fix this issue.
|
|
|
|
|
I just deleted machine.config and web.config from the .NET Framework folder.
It seems to work now.
|
|
|
|
|
I got a USB Coral today and have spent hours trying to get it working properly. I've installed Object Detection (Coral) 2.2.2 and have tried every model and every size. The Coral itself works and is detecting, although it's buggy AF; it can be reliably crashed by changing models in the Explorer. Sometimes stopping and starting again fixes it, but sometimes a reboot is needed.
The issue is there only seems to be 1 object returned at a time, and it's always a car or a train or a plane and not a person, depending on the model and size used. I've had almost everything except person! It's really frustrating. If there is nothing but a person, it will detect them. And it's really inaccurate compared to YOLOv5 6.2. Hand-selecting images with a very, very clear person in them will return person.
YOLOv5 6.2 works flawlessly and returns everything in the image (normally double the detected objects), so person is detected and triggers the alert. I've tested multiple images and the person is detected 100% of the time, and a car is always a car, not a train or plane.
I must be missing something. Why does the Coral not return everything in the picture? And why is it really bad at detecting compared to the other model?
The time is definitely better: 89ms round trip on the Coral vs 198ms on YOLOv5 6.2 on a CUDA GPU.
For now I have to stick with the GPU and YOLOv5... hopefully the Coral wasn't a total waste of money.
|
|
|
|
|
For what it's worth, I haven't heard of this problem before. (USB being flakey is, however, definitely common.) I'm not sure why you'd be seeing such random results. This sounds like a problem with interpreting results and labels. What model(s) are you using? Do some models work better than others? Are you able to see the scores returned by the object detection and thresholds used? Do you have the same results when you send the images directly into the CPAI interface?
|
|
|
|
|
I've tried with static images using the Explorer, using YOLOv5, YOLOv8 and Coral, and the live overlay in BI shows exactly the same detections.
I have 3 images from my garage camera; all are in greyscale/night-time mode and are 2560x1920 24-bit images. The camera is roof-mounted in the corner and looks over the 2 cars to the door on the opposite side of the carport; the cars are static in all 3 images. In img629 I am behind the 2 cars with the upper half of my body clearly visible. In img130 I'm in a dark corner near the door, about 3/4 visible but dark. img028 is me in pretty much full view in front of both cars. There should only be the 2 cars and 1 person detected in each of the images.
YOLOv8 1.4.3 GPU(CUDA)
img629
# Label Confidence
0 car 88%
1 person 82%
2 car 73%
3 person 44%
Processed by ObjectDetectionYOLOv8
Processed on localhost
Analysis round trip 299 ms
Processing 234 ms
Inference 231 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:31:37 GMT
img130
# Label Confidence
0 car 89%
1 person 54%
2 car 51%
Processed by ObjectDetectionYOLOv8
Processed on localhost
Analysis round trip 282 ms
Processing 227 ms
Inference 225 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:34:06 GMT
img028
# Label Confidence
0 person 93%
1 car 88%
2 car 80%
Processed by ObjectDetectionYOLOv8
Processed on localhost
Analysis round trip 280 ms
Processing 230 ms
Inference 229 ms
Timestamp (UTC) Tue, 30 Apr 2024 05:08:42 GMT
YOLOv5 6.2 1.9.1 GPU(CUDA)
img629
# Label Confidence
0 person 77%
1 car 58%
2 person 46%
Processed by ObjectDetectionYOLOv5-6.2
Processed on localhost
Analysis round trip 255 ms
Processing 195 ms
Inference 193 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:36:52 GMT
img130
# Label Confidence
0 car 55%
1 person 47%
Processed by ObjectDetectionYOLOv5-6.2
Processed on localhost
Analysis round trip 415 ms
Processing 334 ms
Inference 333 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:36:07 GMT
img028
# Label Confidence
0 person 85%
1 person 60%
2 car 56%
Processed by ObjectDetectionYOLOv5-6.2
Processed on localhost
Analysis round trip 188 ms
Processing 131 ms
Inference 128 ms
Timestamp (UTC) Tue, 30 Apr 2024 05:06:13 GMT
Here is the startup of the Coral module; I can't see any errors or anything untoward. It's the same when loading any of the models:
12:38:49:Update ObjectDetectionCoral. Setting AutoStart=true
12:38:49:Restarting Object Detection (Coral) to apply settings change
12:38:49:Running module using: C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python
12:38:49:Starting C:\Program Files...ws\python39\venv\Scripts\python "C:\Program Files...ectdetection_coral_adapter.py"
12:38:49:
12:38:49:Attempting to start ObjectDetectionCoral with C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python "C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\objectdetection_coral_adapter.py"
12:38:49:
12:38:49:Module 'Object Detection (Coral)' 2.2.2 (ID: ObjectDetectionCoral)
12:38:49:Valid: True
12:38:49:Module Path: root\modules\ObjectDetectionCoral
12:38:49:AutoStart: True
12:38:49:Queue: objectdetection_queue
12:38:49:Runtime: python3.9
12:38:49:Runtime Loc: Local
12:38:49:FilePath: objectdetection_coral_adapter.py
12:38:49:Start pause: 1 sec
12:38:49:Parallelism: 16
12:38:49:LogVerbosity:
12:38:49:Platforms: all
12:38:49:GPU Libraries: installed if available
12:38:49:GPU Enabled: enabled
12:38:49:Accelerator:
12:38:49:Half Precis.: enable
12:38:49:Environment Variables
12:38:49:CPAI_CORAL_MODEL_NAME = MobileNet SSD
12:38:49:CPAI_CORAL_MULTI_TPU = True
12:38:49:MODELS_DIR = root\modules\ObjectDetectionCoral\assets
12:38:49:MODEL_SIZE = small
12:38:49:
12:38:49:Started Object Detection (Coral) module
12:38:56:objectdetection_coral_adapter.py: MODULE_PATH: C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral
12:38:56:objectdetection_coral_adapter.py: MODELS_DIR: C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets
12:38:56:objectdetection_coral_adapter.py: CPAI_CORAL_MODEL_NAME: mobilenet ssd
12:38:56:objectdetection_coral_adapter.py: MODEL_SIZE: small
12:38:56:objectdetection_coral_adapter.py: Running init for Object Detection (Coral)
12:38:56:objectdetection_coral_adapter.py: CPU_MODEL_NAME: tf2_ssd_mobilenet_v2_coco17_ptq.tflite
12:38:56:objectdetection_coral_adapter.py: TPU_MODEL_NAME: tf2_ssd_mobilenet_v2_coco17_ptq_edgetpu.tflite
12:38:56:objectdetection_coral_adapter.py: TPU detected
12:38:56:objectdetection_coral_adapter.py: Attempting multi-TPU initialisation
12:38:56:objectdetection_coral_adapter.py: Supporting multiple Edge TPUs
And here are the results from the various models. I had to change them from the server settings page because selecting them in the Explorer crashes the USB every time without fail.
MobileNet SSD
img629
# Label Confidence
0 car 63%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 82 ms
Processing 30 ms
Inference 15 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:48:22 GMT
img130
# Label Confidence
0 car 74%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 104 ms
Processing 35 ms
Inference 17 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:47:51 GMT
img028
# Label Confidence
0 train 63%
1 person 57%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 85 ms
Processing 33 ms
Inference 16 ms
Timestamp (UTC) Tue, 30 Apr 2024 05:01:31 GMT
YOLOv5
img629
# Label Confidence
0 car 63%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 87 ms
Processing 32 ms
Inference 15 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:51:40 GMT
img130
# Label Confidence
0 car 74%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 92 ms
Processing 34 ms
Inference 18 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:50:55 GMT
img028
# Label Confidence
0 train 63%
1 person 57%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 114 ms
Processing 50 ms
Inference 28 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:59:26 GMT
YOLOv8
img629
# Label Confidence
0 car 63%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 98 ms
Processing 33 ms
Inference 16 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:53:56 GMT
img130
# Label Confidence
0 car 74%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 96 ms
Processing 39 ms
Inference 20 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:53:17 GMT
img028
# Label Confidence
0 train 63%
1 person 57%
Processed by ObjectDetectionCoral
Processed on localhost
Analysis round trip 86 ms
Processing 36 ms
Inference 17 ms
Timestamp (UTC) Tue, 30 Apr 2024 04:57:31 GMT
img130 is a nice-to-have detection, img629 is the absolute minimum, and img028 should never, ever fail no matter what. With img629's detections being very accurate and >70% confidence for all 3 objects using YOLOv8, I really don't understand how the Coral can't keep up. That's a 25% reduction in confidence for the first item.
I just got my hands on a second Coral, so I really hope they get figured out and work properly.
|
|
|
|
|
|