Alex Bezdieniezhnykh
7750025631
separate load functionality from the inference client into a loader client. Call the loader client from inference to get the model.
...
remove dummy DLLs, remove resource loader from C#.
TODO: load DLLs separately via Loader UI and loader client
WIP
2025-06-06 20:04:03 +03:00
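The loader/inference split described in the commit above could be sketched roughly as below. All class and method names here (`LoaderClient`, `InferenceClient`, `get_model`) are illustrative assumptions, not taken from the repository:

```python
class LoaderClient:
    """Hypothetical client that owns model loading (names are illustrative)."""

    def __init__(self, model_path):
        self.model_path = model_path
        self._model = None

    def get_model(self):
        # Load lazily on first request so the inference client
        # never touches loading logic itself.
        if self._model is None:
            self._model = {"path": self.model_path, "loaded": True}  # stand-in for a real load
        return self._model


class InferenceClient:
    """Asks the loader for the model instead of loading it directly."""

    def __init__(self, loader):
        self.loader = loader

    def infer(self, sample):
        model = self.loader.get_model()
        return {"model": model["path"], "input": sample}
```

With this split, the loader can later be driven by a separate Loader UI (per the TODO) while inference code stays unchanged.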
dzaitsev
d92da6afa4
Send errors to UI
...
notifying client of AI model conversion
2025-05-14 12:43:50 +03:00
Alex Bezdieniezhnykh
73c2ab5374
stop inference when Stop is pressed
...
small fixes
2025-03-24 10:52:32 +02:00
Alex Bezdieniezhnykh
6429ad62c2
refactor external clients
...
make model batch size a config parameter
2025-03-24 00:33:41 +02:00
Alex Bezdieniezhnykh
cfd5483a18
make Python app load a bit earlier, making startup a bit faster
2025-02-13 18:13:15 +02:00
Alex Bezdieniezhnykh
739759628a
fixed inference bugs
...
add DONE marker during inference, correct handling on the C# side
2025-02-01 02:09:11 +02:00
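The DONE handling mentioned above can be sketched as a stream that ends with a sentinel the consumer (the C# side in the log) watches for. The literal `"DONE"` token and the function names are assumptions for illustration, not the repo's actual wire format:

```python
DONE = "DONE"  # assumed sentinel value; the real token/format may differ


def inference_stream(results):
    """Yield inference results, then a DONE marker so the consumer
    knows the stream has finished cleanly."""
    for r in results:
        yield r
    yield DONE


def consume(stream):
    """Collect results until the DONE sentinel arrives."""
    collected = []
    for msg in stream:
        if msg == DONE:
            break
        collected.append(msg)
    return collected
```

The sentinel lets the consumer distinguish "stream finished" from "stream stalled", which is what makes the C#-side handling correct.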
Alex Bezdieniezhnykh
62623b7123
add ramdisk, load AI model to ramdisk and start recognition from it
...
rewrite zmq to DEALER and ROUTER
add GET_USER command to get CurrentUser from Python
all auth is on the Python side
run inference and annotation validation in Python
2025-01-29 17:45:26 +02:00
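The DEALER/ROUTER rewrite above can be illustrated with a minimal pyzmq round trip. The `GET_USER` command comes from the log; the endpoint name and the JSON reply payload are assumptions for the sketch, not the project's real protocol:

```python
import zmq  # pyzmq

ctx = zmq.Context.instance()

# ROUTER (the Python service side): it sees each client's identity frame,
# so it can route the reply back to the right DEALER.
router = ctx.socket(zmq.ROUTER)
router.bind("inproc://inference")

# DEALER (the C# client side in the log; a Python stand-in here).
dealer = ctx.socket(zmq.DEALER)
dealer.connect("inproc://inference")

# Client asks for the current user with the GET_USER command.
dealer.send_multipart([b"GET_USER"])

# ROUTER receives [identity, payload] and must echo the identity
# frame back so the message reaches the requesting DEALER.
identity, command = router.recv_multipart()
router.send_multipart([identity, b'{"user": "alice"}'])  # assumed reply format

reply = dealer.recv_multipart()[0]
```

Unlike REQ/REP, DEALER/ROUTER allows multiple in-flight requests per client, which fits an inference service that streams results while accepting new commands.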