|
IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to confirm whether it's a server issue rather than a problem with Blue Iris or another app using CodeProject.AI Server.
- If there's no obvious answer, post a message that includes the contents of the System Info tab, a description of what you are doing, what you see, and what you would expect to see.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1: Go to the Install Modules tab on the dashboard and try reinstalling the module. Make sure you have enough disk space and a reliable internet connection.
Option 2: (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup . This will force a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach . Then cd /app/modules/module-id, where module-id is the id of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud for less or more output.)
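For reference, the Docker steps above can be sketched as a small helper. The app root and module id below are examples, and the runner is injectable so the command can be inspected without actually executing it — a sketch, not part of the server itself.

```python
import subprocess
from pathlib import Path

def reinstall_module(app_root, module_id, runner=subprocess.run):
    """Force a manual module reinstall by running the install script
    from inside the module's folder, as described above. `runner` is
    injectable so the command can be inspected or mocked."""
    module_dir = Path(app_root) / "modules" / module_id
    # Inside the Docker container this is: sudo bash ../../setup.sh --verbosity info
    cmd = ["sudo", "bash", "../../setup.sh", "--verbosity", "info"]
    return runner(cmd, cwd=str(module_dir))
```

Pass a stub as `runner` to dry-run it and check the command before running it for real.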
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
If you are a Blue Iris user using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently change CodeProject.AI Server's settings. You can do this yourself by starting CodeProject.AI Server manually with command line parameters (not a great solution), editing the module settings files (a little messy), or setting system-wide environment variables (far easier). In version 1.6 we added an API that lets any app change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a particular model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris about what models CodeProject.AI server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then create empty copies of the models we have and place them in that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets , which no longer exists) then we create the folder first, then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models )
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
cheers
Chris Maunder
|
Q1. Can this be run on Windows 7 SP1? If not, that's too bad. (Many others, like "LocalAI", never answer this question.)
Q2. What is the minimum requirement to use this for text generation/chat? How much memory and/or what GPU do I need?
Q3. Can I use this without internet (offline)? Does this really work without it? I'm planning to download the installer and install offline.
Q4. Can your AI process non-English languages? Can I train it on my own text data?
|
My primary host is running UnRaid 6.12.9 with a CPAI docker container and a Windows VM running BlueIris. I have another Linux Mint machine on the network running CPAI with no docker container. Both instances of CPAI can see each other, sort of.
If I set Blue Iris to target the remote Linux machine, everything works fine, excess requests get passed back to the Unraid machine, processed, sent back. Everything is happy.
But if I set Blue Iris to target the docker running on the same machine as itself, it still sees the other Mint machine and tries to send overflow to it, but times out with no response.
The status of the Unraid server from the Mint remote wavers from active true to active false every few seconds.
I just upgraded to CPAI 2.6.5 but had the same behaviour under 2.6.2. UDP ports are open to the docker container.
I want to keep the docker on the Unraid as the primary instance because the remote mint machine is used frequently and is prone to going down for various reasons, and blue iris is unable to automatically switch servers.
From the Unraid Server:
Current Server mesh status
UnraidServer
Hostname: 172.17.0.8
System: Docker (Linux) Tesla P4
Platform: Docker
Active: true
Forwarding Requests: true
Accepting Requests: true
Visible Servers:
mintMachine
Routes Available: (16366 processed)
vision/custom 43.7ms (avg process time), 15756 processed
vision/custom/list 0ms (avg process time), 0 processed
vision/detection 0ms (avg process time), 0 processed
vision/face 20ms (avg process time), 610 processed
vision/face/match 0ms (avg process time), 0 processed
Remote Servers in mesh: 1
mintMachine
Hostname: mintMachine
System: Linux (Linux) NVIDIA GeForce GTX 1660 Ti with Max-Q Design
Platform: Linux
Active: true
Forwarding Requests: true
Accepting Requests: true
Visible Servers:
UnraidServer
Routes Available: (3 processed)
vision/custom 3000ms (avg round trip), 2 requests forwarded
vision/custom/list 0ms (avg round trip), 0 requests forwarded
vision/detection 0ms (avg round trip), 0 requests forwarded
vision/face 0ms (avg round trip), 1 requests forwarded
vision/face/match 0ms (avg round trip), 0 requests forwarded
From the Linux Mint remote machine:
Current Server mesh status
mintMachine
Hostname: mintMachine
System: Linux (Linux) NVIDIA GeForce GTX 1660 Ti with Max-Q Design
Platform: Linux
Active: true
Forwarding Requests: true
Accepting Requests: true
Visible Servers:
UnraidServer
Routes Available: (0 processed)
vision/custom 0ms (avg process time), 0 processed
vision/custom/list 0ms (avg process time), 0 processed
vision/detection 0ms (avg process time), 0 processed
vision/face 0ms (avg process time), 0 processed
vision/face/match 0ms (avg process time), 0 processed
Remote Servers in mesh: 1
UnraidServer
Hostname: 192.168.1.101
System: Docker (Linux) Tesla P4
Platform: Docker
Active: false
Forwarding Requests: true
Accepting Requests: true
Visible Servers:
mintMachine
Routes Available: (0 processed)
vision/custom 0ms (avg round trip), 0 requests forwarded
vision/custom/list 0ms (avg round trip), 0 requests forwarded
vision/detection 0ms (avg round trip), 0 requests forwarded
vision/face 0ms (avg round trip), 0 requests forwarded
vision/face/match 0ms (avg round trip), 0 requests forwarded
modified 9hrs 5mins ago.
|
I think I might have got this licked.
I added the IP address of the Mint remote machine to the known mesh servers in appdata/codeprojectai/data/serversettings.json on the Unraid server
"KnownMeshHostnames": [ "192.168.1.103" ],
I already had it in the appsettings.json on the Mint machine, pointing it at the Unraid server, so I'm not sure if you need both pointing at each other, but it seems to be working.
Hopefully this helps out someone having the same issues as me!
|
I am attempting to run image codeproject/ai-server:cuda12_2 (current) under Docker running on Fedora 39. The server has abundant resources with 256 GB of RAM. As far as I know, Docker is not imposing memory limits. When I start the container, codeproject.ai starts normally and without errors. However, it crashes after 5 or 6 minutes with "out of memory", "codeproject exited with code 139." The system log shows "systemd-coredump[1460173]: Process 1451539 (CodeProject.AI.) of user 0 dumped core.#012#012Stack trace of thread 882:#012#0 0x00007fbf944bc898 n/a (/usr/lib/x86_64-linux-gnu/libc.so.6 + 0x28898)#012#1 0x00007fafd2a00640 n/a (n/a + 0x0)#012ELF object binary architecture: AMD x86-64."
The container crashes whether or not it has been accessed, and whether or not it has claimed GPU resources. As long as it is running, it readily accepts images and performs comparisons, using about 1GB of GPU memory and around 3 GB of RAM. However, it still crashes.
I have searched and can't find anyone else with this problem, suggesting that it is something in my environment, but I can't figure out what it could be. I would appreciate any ideas.
|
I'm running the 12_2 CUDA Docker version...
This is what I see:
I also tried the version 11 Cuda Docker as well.
Is it just me or did this change?
|
Hi all. I am a long term Windows and BlueIris user but a novice with linux etc.
In an effort to use the mesh capabilities of CodeProject.AI on BlueIris, I have managed to get Mendel running on a Google Coral Dev Board and now want to install CodeProject.ai to the dev board - and am struggling so would really appreciate assistance, please.
I couldn't find any specific guidance for this board so am following the general installation guide
sudo apt install dotnet-sdk-7.0 appears to be failing with this output:
mendel@coy-apple:~$ sudo apt install dotnet-sdk-7.0
Reading package lists... Done
Building dependency tree... Done
E: Unable to locate package dotnet-sdk-7.0
E: Couldn't find any package by glob 'dotnet-sdk-7.0'
E: Couldn't find any package by regex 'dotnet-sdk-7.0'
mendel@coy-apple:~$
What am I doing wrong, please?
|
Mendel is essentially Debian, so you could try using the Ubuntu .deb installer.
cheers
Chris Maunder
|
I'm using codeproject 2.6.5 and when installing LlamaChat the following error occurs:
Installing simple Python bindings for the llama.cpp library...(❌ failed check) done
Soon after:
23:37:26:LlamaChat: Traceback (most recent call last):
23:37:26:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py", line 16, in <module>
23:37:26:LlamaChat: from llama_chat import LlamaChat
23:37:26:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat.py", line 7, in <module>
23:37:26:LlamaChat: from llama_cpp import ChatCompletionRequestSystemMessage, \
23:37:26:LlamaChat: ModuleNotFoundError: No module named 'llama_cpp'
what is happening?
|
Thanks very much for your message. It could be that the module did not install correctly. Could you please try re-installing it?
If the same thing happens, could you please go to C:\Program Files\CodeProject\AI\modules\LlamaChat and share your install.log (as well as the System Info tab from your CodeProject.AI Server dashboard)?
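In the meantime, a quick sanity check you can run with the module's own virtual-environment Python to confirm whether llama_cpp is actually importable there (a hedged diagnostic sketch, not part of the server):

```python
import importlib.util

def module_available(name):
    """True if `name` can be imported by the *current* interpreter.
    Run this with the LlamaChat venv's python to test for llama_cpp."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    print("llama_cpp available:", module_available("llama_cpp"))
```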
Thanks,
Sean Ewington
CodeProject
|
Hello, I have already reinstalled the entire CodeProject.AI Server on two PCs, but it didn't work on either of them. The log follows:
=============================
2024-05-31 00:20:36: Installing CodeProject.AI Analysis Module
2024-05-31 00:20:36: ======================================================================
2024-05-31 00:20:36: CodeProject.AI Installer
2024-05-31 00:20:36: ======================================================================
2024-05-31 00:20:36: 95.3Gb of 976Gb available on
2024-05-31 00:20:36: General CodeProject.AI setup
2024-05-31 00:20:36: Creating Directories...done
2024-05-31 00:20:36: GPU support
2024-05-31 00:20:36: CUDA Present...Yes (CUDA 12.2, No cuDNN found)
2024-05-31 00:20:37: ROCm Present...No
2024-05-31 00:20:37: Checking for .NET 7.0...Checking SDKs...Upgrading: .NET is 0
2024-05-31 00:20:37: Current version is 0. Installing newer version.
2024-05-31 00:20:37: 'winget' não é reconhecido como um comando interno
2024-05-31 00:20:37: ou externo, um programa operável ou um arquivo em lotes.
2024-05-31 00:20:39: Reading LlamaChat settings.......done
2024-05-31 00:20:39: Installing module LlamaChat 1.4.4
2024-05-31 00:20:39: Installing Python 3.9
2024-05-31 00:20:39: Python 3.9 is already installed
2024-05-31 00:20:46: Creating Virtual Environment (Local)...done
2024-05-31 00:20:46: Confirming we have Python 3.9 in our virtual environment...present
2024-05-31 00:20:46: Downloading mistral-7b-instruct-v0.2.Q4_K_M.gguf
2024-05-31 00:32:41: Moving mistral-7b-instruct-v0.2.Q4_K_M.gguf into the models folder.
2024-05-31 00:32:41: Installing Python packages for LlamaChat
2024-05-31 00:32:41: Installing GPU-enabled libraries: If available
2024-05-31 00:32:42: Ensuring Python package manager (pip) is installed...done
2024-05-31 00:32:52: Ensuring Python package manager (pip) is up to date...done
2024-05-31 00:32:52: Python packages specified by requirements.cuda12_2.txt
2024-05-31 00:32:59: - Installing the huggingface hub...(✅ checked) done
2024-05-31 00:33:01: - Installing disckcache for Disk and file backed persistent cache...(✅ checked) done
2024-05-31 00:33:09: - Installing NumPy, a package for scientific computing...(✅ checked) done
2024-05-31 00:33:25: - Installing simple Python bindings for the llama.cpp library...(❌ failed check) done
2024-05-31 00:33:25: Installing Python packages for the CodeProject.AI Server SDK
2024-05-31 00:33:26: Ensuring Python package manager (pip) is installed...done
2024-05-31 00:33:28: Ensuring Python package manager (pip) is up to date...done
2024-05-31 00:33:28: Python packages specified by requirements.txt
2024-05-31 00:33:32: - Installing Pillow, a Python Image Library...(✅ checked) done
2024-05-31 00:33:32: - Installing Charset normalizer...Already installed
2024-05-31 00:33:36: - Installing aiohttp, the Async IO HTTP library...(✅ checked) done
2024-05-31 00:33:39: - Installing aiofiles, the Async IO Files library...(✅ checked) done
2024-05-31 00:33:41: - Installing py-cpuinfo to allow us to query CPU info...(✅ checked) done
2024-05-31 00:33:42: - Installing Requests, the HTTP library...Already installed
2024-05-31 00:33:42: Scanning modulesettings for downloadable models...No models specified
2024-05-31 00:33:42: Traceback (most recent call last):
2024-05-31 00:33:42: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py", line 16, in <module>
2024-05-31 00:33:42: from llama_chat import LlamaChat
2024-05-31 00:33:42: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat.py", line 7, in <module>
2024-05-31 00:33:42: from llama_cpp import ChatCompletionRequestSystemMessage, \
2024-05-31 00:33:42: ModuleNotFoundError: No module named 'llama_cpp'
2024-05-31 00:33:43: Self test: Self-test passed
2024-05-31 00:33:43: Module setup time 00:13:05.67
2024-05-31 00:33:43: Setup complete
2024-05-31 00:33:43: Total setup time 00:13:06.86
Installer exited with code 0
===============================
|
Can you please paste the info from the System Info tab here? Otherwise we're just guessing at what system you have.
The translation is "'winget' is not recognized as an internal or external command", which means you're missing some bits. I'm guessing there may be other issues the installer is having due to the language on your machine not being English.
cheers
Chris Maunder
|
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: AMD Ryzen 9 5900X 12-Core Processor (AMD)
1 CPU x 12 cores. 24 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3060 (12 GiB) (NVIDIA)
Driver: 536.25, CUDA: 12.2.91 (up to: 12.2), Compute: 8.6, cuDNN:
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.10
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 3060:
Driver Version 31.0.15.3625
Video Processor NVIDIA GeForce RTX 3060
System GPU info:
GPU 3D Usage 44%
GPU RAM Usage 10,6 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
I'm currently running 2.6.2 and it is working fine. 2.6.2 was easier to install than previous version and reflects amazing work by the team!
If I read the "release note", it only says: "2.6.5 Various installer fixes"
Given that upgrades may or may not be fast, or even successful, I would not, based on this alone, choose to upgrade solely for installer fixes...
But, in the UI, I see: "An update to version 2.6.5 is available Download
Support for external modules and module updates."
OK, that's a different matter... do I need to upgrade to get the updated modules? I don't see any modules available for update in the Modules control panel. I thought that was the point of modules?
Is an upgrade recommended if I already have 2.6.2 installed and functioning?
Do I need to upgrade in order to update modules or should module updates be available in 2.6.2?
A little more clarity would be helpful.
|
You do indeed need to upgrade to get the updated modules.
Generally the further we get along, the more stable CodeProject.AI Server becomes. Also, if you don't upgrade, you'll get to a point where we're patching modules, updating modules, and actively working on the latest modules with the belief the majority of users are using them. Then if you ever have a problem with your current modules or setup, you'll be that far removed from the latest version.
Thanks,
Sean Ewington
CodeProject
|
Server version: 2.6.5
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): HD Graphics 530 (rev 06) (Intel Corporation)
System RAM: 8 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
HD Graphics 530 (rev 06):
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
Valid: True
Module Path: <root>/modules/ObjectDetectionYOLOv5-6.2
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.8
Runtime Location: Shared
FilePath: detect_adapter.py
Start pause: 1 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!raspberrypi,!jetson
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
APPDIR = <root>/modules/ObjectDetectionYOLOv5-6.2
CUSTOM_MODELS_DIR = <root>/modules/ObjectDetectionYOLOv5-6.2/custom-models
MODELS_DIR = <root>/modules/ObjectDetectionYOLOv5-6.2/assets
MODEL_SIZE = Medium
USE_CUDA = True
YOLOv5_AUTOINSTALL = false
YOLOv5_VERBOSE = false
Status Data: {
"inferenceDevice": "CPU",
"inferenceLibrary": "",
"canUseGPU": "false",
"successfulInferences": 1673,
"failedInferences": 1,
"numInferences": 1674,
"averageInferenceMs": 799.8786610878661
}
Started: 29 May 2024 6:55:47 AM Central Standard Time
LastSeen: 29 May 2024 9:03:58 AM Central Standard Time
Status: Started
Requests: 1674 (includes status calls)
Installation Log
2024-05-29 06:31:54: Setting verbosity to quiet
2024-05-29 06:31:54: Installing CodeProject.AI Analysis Module
2024-05-29 06:31:54: ======================================================================
2024-05-29 06:31:54: CodeProject.AI Installer
2024-05-29 06:31:54: ======================================================================
2024-05-29 06:31:54: 505.05 GiB of 843.02 GiB available on linux
2024-05-29 06:31:54: Installing xz-utils...
2024-05-29 06:31:56: General CodeProject.AI setup
2024-05-29 06:31:56: Setting permissions on runtimes folder...done
2024-05-29 06:31:56: Setting permissions on downloads folder...done
2024-05-29 06:31:56: Setting permissions on modules download folder...done
2024-05-29 06:31:56: Setting permissions on models download folder...done
2024-05-29 06:31:56: Setting permissions on persisted data folder...done
2024-05-29 06:31:56: GPU support
2024-05-29 06:31:56: CUDA (NVIDIA) Present: No
2024-05-29 06:31:56: ROCm (AMD) Present: No
2024-05-29 06:31:56: MPS (Apple) Present: No
2024-05-29 06:31:57: Reading module settings.......done
2024-05-29 06:31:57: Processing module ObjectDetectionYOLOv5-6.2 1.9.1
2024-05-29 06:31:57: Installing Python 3.8
2024-05-29 06:31:57: Python 3.8 is already installed
2024-05-29 06:32:02: W: https:
2024-05-29 06:32:09: Ensuring PIP in base python install... done
2024-05-29 06:32:10: Upgrading PIP in base python install... done
2024-05-29 06:32:10: Virtual Environment already present
2024-05-29 06:32:10: Checking for Python 3.8...(Found Python 3.8.19) All good
2024-05-29 06:32:12: Upgrading PIP in virtual environment... done
2024-05-29 06:32:14: Installing updated setuptools in venv... done
2024-05-29 06:32:47: Downloading Standard YOLO models...Expanding... done.
2024-05-29 06:32:47: Moving contents of models-yolo5-pt.zip to assets...done.
2024-05-29 06:33:30: Downloading Custom YOLO models...Expanding... done.
2024-05-29 06:33:30: Moving contents of custom-models-yolo5-pt.zip to custom-models...done.
2024-05-29 06:33:30: Installing Python packages for Object Detection (YOLOv5 6.2)
2024-05-29 06:33:30: Installing GPU-enabled libraries: If available
2024-05-29 06:33:31: Searching for python3-pip...All good.
2024-05-29 06:33:34: Ensuring PIP compatibility... done
2024-05-29 06:33:34: Python packages will be specified by requirements.linux.txt
2024-05-29 06:33:36: - Installing Pandas, a data analysis / data manipulation tool...Already installed
2024-05-29 06:33:37: - Installing CoreMLTools, for working with .mlmodel format models...Already installed
2024-05-29 06:33:38: - Installing OpenCV, the Open source Computer Vision library...Already installed
2024-05-29 06:33:40: - Installing Pillow, a Python Image Library...Already installed
2024-05-29 06:33:41: - Installing SciPy, a library for mathematics, science, and engineering...Already installed
2024-05-29 06:33:42: - Installing PyYAML, a library for reading configuration files...Already installed
2024-05-29 06:33:44: - Installing Torch, for Tensor computation and Deep neural networks...Already installed
2024-05-29 06:33:45: - Installing TorchVision, for Computer Vision based AI...Already installed
2024-05-29 06:38:39: - Installing Ultralytics YoloV5 package for object detection in images... (✅ checked) done
2024-05-29 06:38:41: - Installing Seaborn, a data visualization library based on matplotlib...Already installed
2024-05-29 06:38:41: Installing Python packages for the CodeProject.AI Server SDK
2024-05-29 06:38:42: Searching for python3-pip...All good.
2024-05-29 06:38:47: Ensuring PIP compatibility... done
2024-05-29 06:38:47: Python packages will be specified by requirements.txt
2024-05-29 06:38:49: - Installing Pillow, a Python Image Library...Already installed
2024-05-29 06:38:51: - Installing Charset normalizer...Already installed
2024-05-29 06:38:53: - Installing aiohttp, the Async IO HTTP library...Already installed
2024-05-29 06:38:55: - Installing aiofiles, the Async IO Files library...Already installed
2024-05-29 06:38:57: - Installing py-cpuinfo to allow us to query CPU info...Already installed
2024-05-29 06:38:59: - Installing Requests, the HTTP library...Already installed
2024-05-29 06:38:59: Scanning modulesettings for downloadable models...No models specified
2024-05-29 06:39:08: Fusing layers...
2024-05-29 06:39:09: YOLOv5.1m summary: 391 layers, 21805053 parameters, 0 gradients
2024-05-29 06:39:09: Adding AutoShape...
2024-05-29 06:39:12: Self test: Self-test passed
2024-05-29 06:39:12: Module setup time 00:07:16
2024-05-29 06:39:13: Setup complete
2024-05-29 06:39:13: Total setup time 00:07:19
Installer exited with code 0
1: Would it be possible to change the "DisableLegacyPort" parameter in the appsettings.json file from false to true? It would save some time troubleshooting when CPAI collides with another program running on the machine.
2: I might be doing this wrong, but...
I am having to do the following to get the service to run on Ubuntu 22.04:
a: copy /bin/codeproject.ai-server-2.6.5/codeproject.ai-server.service to /lib/systemd/system
b: run sudo systemctl enable codeproject.ai-server
c: reboot the machine.
Am I doing something wrong, or should this be done by the install program?
It seems to indicate that the service will start when the machine reboots, but in my experience, that does not happen.
TIA
|
Quote: Would it be possible to change the "DisableLegacyPort" parameter in the appsettings.json file from false to true?
Absolutely. We can't do that for everyone, since it's there to provide seamless legacy support, but you can just edit the value yourself, restart the server, and you're good to go.
Steve Winn wrote: I am having to do the following to get the service to run on Ubuntu 22.04:
The installer does this itself, and you should see the line "Adding CodeProject.AI Server to Daemon list"
It does
sudo cp "/usr/bin/codeproject.ai-server-2.6.5/codeproject.ai-server.service" /etc/systemd/system/codeproject.ai-server.service
sudo systemctl daemon-reload
It doesn't reboot, just reloads, so maybe a reboot is all that's needed. However, if you're not seeing the service file in place then something's up.
cheers
Chris Maunder
|
Oh. So I must have an odd distro or install, as at least with 2.6.2 it didn't do that for me.
But also with earlier versions, the only way I could get a proper install was to sudo -i and run dpkg -i as root. (permission issues creating directories)
I haven't used *nix in a production environment for a very long time, so I'm really out of practice.
|
sudo'ing to install is definitely necessary.
And yeah - after being out of Unixville for many, many years it's been a learning curve (but a comfortable one) getting back into the swing of things.
All my bad habits are still nicely in place
cheers
Chris Maunder
|
I've posted this before.
You aren't tracking with me.
I always sudo when I do most admin tasks.
But with 2.5.* and 2.6.2 installing using sudo, I was still seeing permission errors.
I wasn't able to complete the install successfully until I did sudo -i to change to the root user.
root@Ubuntu:#
And then run dpkg -i <filename.deb>
I don't know why it's a permission thing.
Ubuntu 22.04 installed taking defaults, up to date, nothing special.
This happens with the CodeProject .deb files.
Most other apps I can install as sudo.
Just a curiosity as far as I'm concerned, now that I know.
Earlier versions would not open with Software Installer.
You would get Loading... forever.
Maybe that's why I started down this path. Who knows.
|
To start, here is the info that's usually asked for:
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 630 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2111
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 630:
Driver Version 31.0.101.2111
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 9%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
Valid: True
Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.7
Runtime Location: Shared
FilePath: detect_adapter.py
Start pause: 1 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!raspberrypi,!jetson
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
MODEL_SIZE = Medium
USE_CUDA = True
YOLOv5_AUTOINSTALL = false
YOLOv5_VERBOSE = false
Status Data: {
"inferenceDevice": "CPU",
"inferenceLibrary": "",
"canUseGPU": "false",
"successfulInferences": 65,
"failedInferences": 0,
"numInferences": 65,
"averageInferenceMs": 871.9076923076923
}
Started: 29 May 2024 7:52:34 AM Central Standard Time
LastSeen: 29 May 2024 8:04:21 AM Central Standard Time
Status: Started
Requests: 65 (includes status calls)
Installation Log
2024-05-29 07:08:20: Installing CodeProject.AI Analysis Module
2024-05-29 07:08:20: ======================================================================
2024-05-29 07:08:20: CodeProject.AI Installer
2024-05-29 07:08:20: ======================================================================
2024-05-29 07:08:20: 288.3Gb of 487Gb available on
2024-05-29 07:08:21: General CodeProject.AI setup
2024-05-29 07:08:21: Creating Directories...done
2024-05-29 07:08:21: GPU support
2024-05-29 07:08:21: CUDA Present...No
2024-05-29 07:08:21: ROCm Present...No
2024-05-29 07:08:22: Checking for .NET 7.0...Checking SDKs...Upgrading: .NET is 0
2024-05-29 07:08:22: Current version is 0. Installing newer version.
2024-05-29 07:08:22: 'winget' is not recognized as an internal or external command,
2024-05-29 07:08:22: operable program or batch file.
2024-05-29 07:08:24: Reading ObjectDetectionYOLOv5-6.2 settings.......done
2024-05-29 07:08:24: Installing module Object Detection (YOLOv5 6.2) 1.9.1
2024-05-29 07:08:24: Installing Python 3.7
2024-05-29 07:08:32: Downloading Python 3.7 interpreter...Expanding...done.
2024-05-29 07:08:42: Creating Virtual Environment (Shared)...done
2024-05-29 07:08:42: Confirming we have Python 3.7 in our virtual environment...present
2024-05-29 07:09:24: Downloading Standard YOLO models...Expanding...done.
2024-05-29 07:09:25: Copying contents of models-yolo5-pt.zip to assets...done
2024-05-29 07:10:23: Downloading Custom YOLO models...Expanding...done.
2024-05-29 07:10:24: Copying contents of custom-models-yolo5-pt.zip to custom-models...done
2024-05-29 07:10:24: Installing Python packages for Object Detection (YOLOv5 6.2)
2024-05-29 07:10:24: Installing GPU-enabled libraries: If available
2024-05-29 07:10:25: Ensuring Python package manager (pip) is installed...done
2024-05-29 07:10:35: Ensuring Python package manager (pip) is up to date...done
2024-05-29 07:10:35: Python packages specified by requirements.txt
2024-05-29 07:10:37: - Installing urllib3, the HTTP client for Python...(✅ checked) done
2024-05-29 07:10:51: - Installing Pandas, a data analysis / data manipulation tool...(✅ checked) done
2024-05-29 07:11:42: - Installing CoreMLTools, for working with .mlmodel format models...(✅ checked) done
2024-05-29 07:11:49: - Installing OpenCV, the Open source Computer Vision library...(✅ checked) done
2024-05-29 07:11:53: - Installing Pillow, a Python Image Library...(✅ checked) done
2024-05-29 07:12:04: - Installing SciPy, a library for mathematics, science, and engineering...(✅ checked) done
2024-05-29 07:12:05: - Installing PyYAML, a library for reading configuration files...Already installed
2024-05-29 07:12:30: - Installing PyTorch, for Tensor computation and Deep neural networks...(✅ checked) done
2024-05-29 07:13:39: - Installing TorchVision, for Computer Vision based AI...(✅ checked) done
2024-05-29 07:15:24: - Installing Ultralytics YoloV5 package for object detection in images...(✅ checked) done
2024-05-29 07:15:26: - Installing Seaborn, a data visualization library based on matplotlib...Already installed
2024-05-29 07:15:26: Installing Python packages for the CodeProject.AI Server SDK
2024-05-29 07:15:28: Ensuring Python package manager (pip) is installed...done
2024-05-29 07:15:31: Ensuring Python package manager (pip) is up to date...done
2024-05-29 07:15:31: Python packages specified by requirements.txt
2024-05-29 07:15:33: - Installing Pillow, a Python Image Library...Already installed
2024-05-29 07:15:34: - Installing Charset normalizer...Already installed
2024-05-29 07:15:40: - Installing aiohttp, the Async IO HTTP library...(✅ checked) done
2024-05-29 07:15:43: - Installing aiofiles, the Async IO Files library...(✅ checked) done
2024-05-29 07:15:46: - Installing py-cpuinfo to allow us to query CPU info...(✅ checked) done
2024-05-29 07:15:48: - Installing Requests, the HTTP library...Already installed
2024-05-29 07:15:48: Scanning modulesettings for downloadable models...No models specified
2024-05-29 07:15:54: Fusing layers...
2024-05-29 07:15:55: YOLOv5.1m summary: 391 layers, 21805053 parameters, 0 gradients
2024-05-29 07:15:55: Adding AutoShape...
2024-05-29 07:15:56: Self test: Self-test passed
2024-05-29 07:15:56: Module setup time 00:07:33.99
2024-05-29 07:15:56: Setup complete
2024-05-29 07:15:56: Total setup time 00:07:35.51
Installer exited with code 0
I upgraded from CPAI version 2.5.6 to CPAI version 2.6.5.
I installed version 2.6.5 on both the Windows machine that runs Blue Iris and the Linux machine.
Nothing has changed regarding Blue Iris on the Windows machine.
When I check the Use custom models checkbox on the AI tab, and restart the Blue Iris service, the custom models listbox does not populate.
This occurs whether I have the Blue Iris AI pointed to localhost or to the Linux machine.
Clicking the open AI dashboard link on the AI tab in Blue Iris opens the dashboard on either machine.
Selecting Default object detection on either machine works without issue. Only use custom models does not work.
Restarting the Blue Iris service has always been my go-to way to populate this listbox, as I believe it forces Blue Iris to make this call from the API reference:
POST: http://localhost:32168/v1/vision/custom/list
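For what it's worth, you can trigger that same list call by hand instead of restarting Blue Iris, which helps narrow down whether the server or Blue Iris is at fault. A minimal sketch using only the Python standard library — the URL and port 32168 are the ones from this thread; the shape of the JSON reply is an assumption, so inspect the raw output on your own install:

```python
import json
import urllib.request
import urllib.error

def list_custom_models(host="localhost", port=32168, timeout=5):
    """POST to CodeProject.AI Server's custom-model list endpoint.

    Returns the decoded JSON response, or None if the server is
    unreachable. The structure of the response body is an assumption -
    print it raw first to see what your server actually returns.
    """
    url = f"http://{host}:{port}/v1/vision/custom/list"
    req = urllib.request.Request(url, data=b"", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    result = list_custom_models()  # or host="192.168.1.27" for the Linux box
    print(result if result is not None else "Could not reach the server")
```

If this returns a model list from the server but Blue Iris still shows an empty listbox, the problem is on the Blue Iris side; if it returns nothing, the server isn't answering that endpoint.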
In earlier versions, when I would restart the Blue Iris server service, I would see the list call in the CPAI server log.
In this version of CPAI (2.6.5) I do not see that call anywhere in the log.
Again, I did not change anything in Blue Iris since I installed the last stable version on about 5/2/2024.
Before I installed version 2.6.5 this morning, the listbox was being populated.
My understanding is that this will keep Blue Iris from using custom models.
I have restarted the Blue Iris server service and rebooted the Blue Iris machine multiple times with no good results.
It seems that I will need to roll back CPAI to version 2.5.6 to regain the custom models functionality.
|
Can you try hitting Ctrl+F5 on the dashboard just to ensure there's no browser caching issues?
cheers
Chris Maunder
|
After doing Ctrl+F5 in the browser on each machine (Windows and Linux):
It works on the Windows machine (localhost) but not on the Linux machine.
With Use AI server on IP port: Localhost
Restart BI Service
Listbox populates.
With Use AI server on IP port: 192.168.1.27 (the Linux machine)
Restart BI service
Listbox is blank
Can you run the POST: http://localhost:32168/v1/vision/custom/list command from a browser?
Or do you need to use a programming language?
Just to see if it works. I've never tried it.
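On the browser question: typing a URL into the address bar sends a GET, not a POST, so you'll need a tool rather than a plain browser to exercise this endpoint. A small standard-library Python sketch that probes both machines mentioned in this thread (hostnames and port are from the posts above) and reports whether each one answers at all:

```python
import urllib.request
import urllib.error

# Hosts from this thread: the Blue Iris box (localhost) and the Linux box.
HOSTS = ["localhost", "192.168.1.27"]

def probe(host, port=32168, timeout=5):
    """Send the POST and report whether the server answers at all."""
    url = f"http://{host}:{port}/v1/vision/custom/list"
    req = urllib.request.Request(url, data=b"", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return f"{host}: HTTP {resp.status}"
    except (urllib.error.URLError, OSError) as exc:
        return f"{host}: unreachable ({exc})"

if __name__ == "__main__":
    for host in HOSTS:
        print(probe(host))
```

If the Windows box answers and the Linux box doesn't, that points at the Linux server (or a firewall between the machines) rather than at Blue Iris.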
|