|
Thanks very much for your report. Could you please share the System Info tab from your CodeProject.AI Server dashboard, and let us know what you are trying to do when you get this error?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
I would like to train a text module with transcribed videos (including their URLs) and be able to ask the AI server questions about the content of the videos. Would this be possible with this server? If so, please provide some suggestions.
Thanks, Murray D.
|
|
|
|
|
Thanks very much for your inquiry. Currently CodeProject.AI Server can only train on images, so not yet.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
First of all, nice work. It makes code writing a bit easier, and it's kind of fun too, having a personal assistant. My plugin for Homeseer leverages its built-in speech engine.
Initially, on Win7, Python 3.9 was a no-go until I figured out the missing DLL.
It also took a bit to figure out the API, as the API docs have not been updated. Once I figured out what the return endpoint is, the rest was easy.
The only issue I see is that the Llama chat module shows CPU, though it looked like it was using the GPU. UPDATE: It looks like it is not using the GPU after all.
4/11/2024 11:17:18 AM ~ Prompt Return {"success":true,"message":"Command is running in the background","commandId":"38ec745d-789f-4f73-b78b-b4be1faba3e1","commandStatus":"running","moduleId":"LlamaChat","moduleName":"LlamaChat","code":200,"command":"prompt","requestId":"38ec745d-789f-4f73-b78b-b4be1faba3e1","inferenceDevice":"CPU","analysisRoundTripMs":6,"processedBy":"localhost","timestampUTC":"Thu, 11 Apr 2024 18:17:23 GMT"}
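For anyone else hunting for the endpoint, the request is essentially a form POST like the sketch below. The route path here is an assumption, since the docs are out of date (check the module's modulesettings.json for the actual route); the "prompt" field matches the "command":"prompt" in the return above.

import requests

resp = requests.post(
    "http://localhost:32168/v1/text/chat",  # assumed route; verify in modulesettings.json
    data={"prompt": "Hello, who are you?"},
    timeout=60,
)
body = resp.json()
# The return is the long-running-command envelope shown above:
# "commandStatus":"running" plus the "inferenceDevice" field ("CPU"/"GPU").
print(body.get("commandStatus"), body.get("inferenceDevice"))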
10:23:41:Module 'LlamaChat' 1.4.1 (ID: LlamaChat)
10:23:41:Valid: True
10:23:41:
10:23:41:Attempting to start LlamaChat with C:\Program Files\CodeProject\AI\modules\LlamaChat\bin\windows\python39\venv\Scripts\python "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py"
10:23:41:Module Path: <root>\modules\LlamaChat
10:23:41:AutoStart: True
10:23:41:Queue: llamachat_queue
10:23:41:Runtime: python3.9
10:23:41:Runtime Loc: Local
10:23:41:FilePath: llama_chat_adapter.py
10:23:41:Start pause: 0 sec
10:23:41:Parallelism: 1
10:23:41:LogVerbosity:
10:23:41:Platforms: all,!raspberrypi,!orangepi,!jetson
10:23:41:GPU Libraries: installed if available
10:23:41:GPU Enabled: enabled
10:23:41:Accelerator:
10:23:41:Half Precis.: enable
10:23:41:Environment Variables
10:23:41:CPAI_LOG_VERBOSITY = Quiet
10:23:41:CPAI_MODULE_LLAMA_MODEL_DIR = .\models
10:23:41:CPAI_MODULE_LLAMA_MODEL_FILEGLOB = *.Q4_K_M.gguf
10:23:41:CPAI_MODULE_LLAMA_MODEL_FILENAME = mistral-7b-instruct-v0.2.Q4_K_M.gguf
10:23:41:CPAI_MODULE_LLAMA_MODEL_REPO = @TheBloke\Mistral-7B-Instruct-v0.2-GGUF
10:23:41:
10:23:41:Started LlamaChat module
10:23:47:llama_chat_adapter.py: Running init for LlamaChat
10:23:47:llama_chat_adapter.py: Using model from '.\models\mistral-7b-instruct-v0.2.Q4_K_M.gguf'
modified 19-Apr-24 11:48am.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
It appears that the module is not reporting GPU usage correctly.
Open Task Manager, go to the Performance tab, and select the GPU.
Then, when you run Llama Chat, you should see a spike in GPU usage.
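If you'd rather check from code than from Task Manager, here is a small sketch using NVIDIA's NVML Python bindings (pip install nvidia-ml-py; a third-party package, not part of CodeProject.AI). Run it while sending a Llama Chat request: a spike in the GPU percentage confirms the GPU is being used even when the module reports CPU.

import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    for _ in range(30):  # sample for ~30 seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU {util.gpu}%  MEM {util.memory}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()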
"Mistakes are prevented by Experience. Experience is gained by making mistakes."
|
|
|
|
|
I did some digging, and every time I run ..\..\setup --verbose, I see it fetches the wheel package every time. I'm not sure if some libs are incompatible with older Windows versions. I acquired the DLL workaround for Win7 and Python 3.9. The TextToImage module works fine with its Python 3.9 virtual environment and uses CUDA. In GPU Shark, core and memory usage spike when making a Llama chat request. I have three modules loaded, and GPU Shark shows three Python instances running. The request return from Llama chat still shows "inferenceDevice":"CPU".
|
|
|
|
|
There was a bug in the detection of the GPU, which I have fixed.
The module is using the GPU; it's just not displaying that fact.
When we release the next set of module updates, you will see the GPU displayed correctly.
No need to worry though: the correction does not change the operation or performance of the module.
"Mistakes are prevented by Experience. Experience is gained by making mistakes."
|
|
|
|
|
Does anyone have any idea how to get YOLOv8 to perform segmentation? I'm using AgentDVR with CodeProject.AI, and so far what I see in CodeProject.AI's log says "detect". I think if segmentation were performed, it would say "segment".
Is there any specific requirement to make YOLOv8 perform segmentation?
I have a Tesla P4, and YOLOv8 is running on the GPU (CUDA).
modified 15-Apr-24 11:34am.
|
|
|
|
|
You can do it in the CodeProject.AI Server Explorer, if you're using the YOLOv8 module, by checking the "Perform segmentation" box.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Understood. But my question is: when it works with AgentDVR, is there a way to make it perform segmentation automatically instead of standard detection?
|
|
|
|
|
I think this might have to be an AgentDVR support question. I know the creator is very responsive on the AgentDVR subreddit[^].
The default API call goes to v1/vision/detection, and you can actually change this in the Object Detection camera settings in AgentDVR (for image segmentation it's v1/vision/segmentation). But the default behaviour of AgentDVR is to draw bounding boxes around the objects on display, and I don't think it even shows bounding boxes on the screengrabs or videos that get taken. So I think you'd be asking for a feature request to do this.
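For reference, hitting the two routes directly looks something like the sketch below, assuming a default install on localhost:32168 and the standard "image" form field used by the vision routes:

import requests

with open("snapshot.jpg", "rb") as f:
    image_bytes = f.read()

for route in ("v1/vision/detection", "v1/vision/segmentation"):
    resp = requests.post(
        f"http://localhost:32168/{route}",
        files={"image": ("snapshot.jpg", image_bytes)},
    )
    data = resp.json()
    # Detection returns a "predictions" list of labelled bounding boxes;
    # inspect the segmentation response for the extra per-object mask data.
    print(route, data.get("success"), len(data.get("predictions", [])))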
Thanks,
Sean Ewington
CodeProject
modified 12-Apr-24 11:23am.
|
|
|
|
|
Hi Sean, thank you for your answer. It helped! In AgentDVR you can adjust the API call for Object Detection, so I was able to change "v1/vision/detection" to "v1/vision/segmentation". After the change, it now uses the "segment" command per the CPAI log!
Thanks a million!
|
|
|
|
|
Hello everyone,
Since updating to 2.6.2, logging has now stopped at random for the second time. Detection seems to work normally.
Only YOLOv5 6.2 is activated, in GPU mode. Face Processing is installed but stopped, because of too many false negatives: it detected me instead of a real intruder.
Logging with 2.5.6 worked without problems.
EDIT: Restarting YOLO alone doesn't help; the entire service must be restarted.
Server version: 2.6.2
System: Windows
Operating System: Windows (Microsoft Windows 10.0.14393)
CPUs: Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz (Intel)
1 CPU x 10 cores. 20 logical processors (x64)
GPU (Primary): NVIDIA T600 (4 GiB) (NVIDIA)
Driver: 527.41, CUDA: 12.0 (up to: 12.0), Compute: 7.5, cuDNN:
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.16
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA T600:
Driver Version 31.0.15.2741
Video Processor NVIDIA T600
System GPU info:
GPU 3D Usage 19%
GPU RAM Usage 3,1 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Thanks very much for the report. Is anything showing in the logs when (or just before) this happens, or does the service suddenly go offline?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
The last entry was a detection request sent by BlueIris. Thereafter it suddenly stops without any additional message.
I'll post a screenshot if it happens again.
|
|
|
|
|
It happened again just now; it's 3:49 PM.
EDIT: It was the web service only. Refreshing the webpage restarted it.
modified 13-Apr-24 1:16am.
|
|
|
|
|
Next time that happens, can you please open the Developer Tools for the webpage (F12 in Chrome) and let us know if there are any errors reported in the console? The webpage never starts/stops the server (just modules), so it sounds like a webpage error.
cheers
Chris Maunder
|
|
|
|
|
Since the last update there is an option to download models. This confuses me a bit. What do the green models mean: are they available, or supported?
For me it would be easier if the download section were hidden, and if I selected a model that is not available, it would be downloaded automatically.
Maybe I'm seeing this wrong, but I would like some clarification on how to use this.
modified 16-Apr-24 12:00pm.
|
|
|
|
|
Green means it's cached (and in theory installed), orange means ready to download. There are tooltips on the options.
cheers
Chris Maunder
|
|
|
|
|
I have the latest CPAI and the latest Coral module. In addition to the various "bulk in" and "bulk out" errors that cause the Coral module to shut down, which I've already reported, I also have this thing happening every day where the Coral module silently stops responding to requests. There is no error in the logs; the detections simply stop. Blue Iris errors out for each attempted detection with "AI: not responding".
System info:
Server version: 2.6.2
System: Docker (dc5e24419dd2)
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
System RAM: 15 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.17
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
I also see this. I resorted to parsing the logs and restarting the service if it doesn't see any detect requests in the last 5 minutes.
Hopefully the underlying issue can be resolved, but it does make me think it would be nice if the server could report either the time since the last request or the timestamp of the last received request.
And perhaps even a per-module watchdog that could run a test image/sample to confirm the module at least responds, and restart it if not?
|
|
|
|
|
How are you doing the log parsing?
|
|
|
|
|
journalctl -u codeproject.ai-server -g "took"
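Expanding on that, here is a rough watchdog sketch. Assumptions: the systemd unit is codeproject.ai-server, as in the command above, and every served request logs a line containing "took", as in the logs earlier in this thread. Note it will also restart the service during genuinely idle periods with no camera traffic.

import subprocess
import time

STALE_AFTER = 5 * 60  # seconds without a served request

def last_request_age():
    """Seconds since the newest 'took' log line, or infinity if none found."""
    out = subprocess.run(
        ["journalctl", "-u", "codeproject.ai-server", "-g", "took",
         "-n", "1", "-o", "short-unix", "--no-pager"],
        capture_output=True, text=True,
    ).stdout.strip()
    if not out:
        return float("inf")
    return time.time() - float(out.split()[0])  # short-unix lines start with the epoch time

while True:
    if last_request_age() > STALE_AFTER:
        subprocess.run(["systemctl", "restart", "codeproject.ai-server"])
        time.sleep(120)  # give the service time to come back up
    time.sleep(60)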
|
|
|
|
|
Hello,
I have CodeProject.AI (2.6.2) in Docker.
I do face recognition with an HTTP request (via the API, ...32168/v1/vision/face/recognize) and within the CodeProject.AI Explorer.
The API shows no result, but the Explorer does the recognition.
Minimum Recognition Confidence is set to 0.4 for both.
This is the log via the API (I run object detection first and then face recognition):
00:30:15:Client request 'detect' in queue 'objectdetection_queue' (...0ca35e)
00:30:15:Request 'detect' dequeued from 'objectdetection_queue' (...0ca35e)
00:30:15:Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command 'detect'
00:30:16:Response rec'd from Object Detection (YOLOv5 6.2) command 'detect' (...0ca35e) ['Found car, person'] took 188ms
00:30:16:Client request 'recognize' in queue 'faceprocessing_queue' (...ef6481)
00:30:16:Request 'recognize' dequeued from 'faceprocessing_queue' (...ef6481)
00:30:16:Face Processing: Retrieved faceprocessing_queue command 'recognize'
00:30:16:Response rec'd from Face Processing command 'recognize' (...ef6481) ['No known faces'] took 160ms
This is the log from the Explorer (face recognition):
00:30:48:Client request 'recognize' in queue 'faceprocessing_queue' (...0ef319)
00:30:48:Request 'recognize' dequeued from 'faceprocessing_queue' (...0ef319)
00:30:48:Face Processing: Retrieved faceprocessing_queue command 'recognize'
00:30:48:Response rec'd from Face Processing command 'recognize' (...0ef319) ['A face was recognised'] took 153ms
Why is that? And how can I get the same results both ways?
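For reference, the request I send is essentially this (a simplified sketch with a placeholder image path, using the standard DeepStack-style form fields):

import requests

with open("face.jpg", "rb") as f:  # placeholder image path
    resp = requests.post(
        "http://localhost:32168/v1/vision/face/recognize",
        files={"image": f},
        data={"min_confidence": 0.4},  # same threshold as in the Explorer
    )
# Via the API, "predictions" comes back with no known faces;
# via the Explorer, the same image is recognised.
print(resp.json())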
modified 19-Apr-24 11:38am.
|
|
|
|