Alex Bezdieniezhnykh
ca1682a86e
add gps matcher service
2025-04-14 09:50:34 +03:00
Alex Bezdieniezhnykh
73c2ab5374
stop inference on stop pressed
...
small fixes
2025-03-24 10:52:32 +02:00
Alex Bezdieniezhnykh
6429ad62c2
refactor external clients
...
put model batch size as parameter in config
2025-03-24 00:33:41 +02:00
Alex Bezdieniezhnykh
2ecbc9bfd4
move zmq port to config file for C# and python
2025-02-16 16:35:52 +02:00
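The commit above moves the ZMQ port into a config file read by both the C# and Python sides. As a minimal sketch of the Python half, here is a loader for such a setting; the `config.json` file name, the `zmq_port` key, and the default port are assumptions for illustration, not taken from the repository:

```python
import json
from pathlib import Path

# Placeholder default; the real project may choose a different port.
DEFAULT_PORT = 5555

def load_zmq_port(config_path="config.json"):
    """Read the ZMQ port from a JSON config shared with the C# side.

    Falls back to DEFAULT_PORT when the file or key is missing, so the
    app still starts with a sane value.
    """
    path = Path(config_path)
    if not path.exists():
        return DEFAULT_PORT
    config = json.loads(path.read_text())
    return int(config.get("zmq_port", DEFAULT_PORT))
```

Keeping the port in one shared file means the C# client and the Python service cannot drift apart when the port changes.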
Alex Bezdieniezhnykh
cfd5483a18
make python app load a bit earlier, making startup a bit faster
2025-02-13 18:13:15 +02:00
Alex Bezdieniezhnykh
c1b5b5fee2
use nms in the model itself, simplify and make postprocessing faster.
...
make inference in batches, fix c# handling, add overlap handling
2025-02-10 14:55:00 +02:00
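The commit above folds NMS into the exported model itself. For readers unfamiliar with the step being moved, here is a standalone greedy NMS in plain Python over `(x1, y1, x2, y2)` boxes; this is an illustrative sketch, not the model's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap an already-kept box by more than iou_threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Running NMS inside the model removes this loop from Python postprocessing, which is why the commit reports a faster postprocess step.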
Alex Bezdieniezhnykh
739759628a
fixed inference bugs
...
add DONE during inference, correct handling on C# side
2025-02-01 02:09:11 +02:00
Alex Bezdieniezhnykh
e7afa96a0b
fix inference UI and annotation saving
2025-01-30 12:33:24 +02:00
Alex Bezdieniezhnykh
62623b7123
add ramdisk, load AI model to ramdisk and start recognition from it
...
rewrite zmq to DEALER and ROUTER
add GET_USER command to get CurrentUser from Python
all auth is on the python side
run inference and validate annotations on the python side
2025-01-29 17:45:26 +02:00
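The commit above rewrites the C#/Python link to a DEALER/ROUTER pair and adds a GET_USER command. A minimal pyzmq sketch of the Python ROUTER side answering that command (the reply payload and the `inproc://commands` endpoint are placeholders; the real service would return CurrentUser and bind a TCP port from config):

```python
import zmq

def serve_one_request(router):
    """Answer a single command frame on a ROUTER socket.

    ROUTER prepends the sender's identity frame to each message, so the
    reply must echo that frame back for ZMQ to route it to the right
    DEALER peer.
    """
    identity, command = router.recv_multipart()
    if command == b"GET_USER":
        reply = b"alex"  # placeholder for the real CurrentUser lookup
    else:
        reply = b"UNKNOWN_COMMAND"
    router.send_multipart([identity, reply])

def demo():
    """Round-trip a GET_USER command over an in-process socket pair."""
    ctx = zmq.Context.instance()
    router = ctx.socket(zmq.ROUTER)
    router.bind("inproc://commands")
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("inproc://commands")
    dealer.send(b"GET_USER")  # DEALER sends the bare command frame
    serve_one_request(router)
    reply = dealer.recv()
    dealer.close()
    router.close()
    return reply
```

Unlike REQ/REP, DEALER/ROUTER does not enforce strict send/recv alternation, which suits a service that streams inference progress (e.g. the DONE marker from an earlier commit) alongside request/reply commands.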