c0f8dd792d | Alex Bezdieniezhnykh | 2025-06-14 21:01:32 +03:00
  fix console log
  fix same-files problem across different Python libs
  correct command logging in the command handler

7750025631 | Alex Bezdieniezhnykh | 2025-06-06 20:04:03 +03:00
  separate load functionality from the inference client into a loader client; call the loader client from inference to get the model
  remove dummy DLLs; remove the resource loader from C#
  TODO: load DLLs separately via the Loader UI and loader client
  WIP

d92da6afa4 | dzaitsev | 2025-05-14 12:43:50 +03:00
  send errors to the UI
  notify the client of AI model conversion

e9a44e368d | Alex Bezdieniezhnykh | 2025-04-24 16:30:21 +03:00
  auto-convert the TensorRT engine from ONNX for the user's specific CUDA GPU

e798af470b | Alex Bezdieniezhnykh | 2025-04-23 23:20:08 +03:00
  read CDN YAML config from the API
  automate TensorRT model conversion when no existing engine matches the user's GPU

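One commit above automates TensorRT model conversion when no engine has been built for the user's GPU. Because serialized TensorRT engines are tied to the GPU they were built on, a per-GPU engine cache is a natural way to decide when conversion is needed. The sketch below is illustrative only; the function names, cache layout, and the use of a hashed GPU name as the cache key are assumptions, not the project's actual implementation.

```python
import hashlib
from pathlib import Path

def engine_cache_path(onnx_path: str, gpu_name: str, cache_dir: str = "engines") -> Path:
    """Deterministic per-GPU cache path for a TensorRT engine built from an ONNX file.

    TensorRT engines are not portable across GPU models, so the cache key
    combines the ONNX file name with a token derived from the GPU name.
    (Hypothetical helper -- names and layout are assumptions.)
    """
    # Turn the GPU name (e.g. "NVIDIA GeForce RTX 3080") into a path-safe token.
    gpu_token = hashlib.sha1(gpu_name.encode("utf-8")).hexdigest()[:12]
    stem = Path(onnx_path).stem
    return Path(cache_dir) / f"{stem}.{gpu_token}.engine"

def needs_conversion(onnx_path: str, gpu_name: str, cache_dir: str = "engines") -> bool:
    """True when no previously built engine exists for this model/GPU pair."""
    return not engine_cache_path(onnx_path, gpu_name, cache_dir).exists()
```

If `needs_conversion` returns `True`, the loader would invoke the ONNX-to-TensorRT build step and write the result to the cache path, so subsequent starts on the same GPU skip conversion entirely.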
b21f8e320f | Alex Bezdieniezhnykh | 2025-04-02 00:29:21 +03:00
  fix bug with annotation-result gradient stops
  add TensorRT engine

d93da15528 | Alex Bezdieniezhnykh | 2025-03-02 21:32:31 +02:00
  fix the mode switcher in DatasetExplorer.xaml

e329e5bb67 | Alex Bezdieniezhnykh | 2025-02-12 13:49:01 +02:00
  make startup faster

9973a16ada | Alex Bezdieniezhnykh | 2025-02-10 18:02:44 +02:00
  print detection results

c1b5b5fee2 | Alex Bezdieniezhnykh | 2025-02-10 14:55:00 +02:00
  use NMS in the model itself; simplify and speed up postprocessing
  run inference in batches; fix C# handling; add overlap handling

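One commit above moves NMS into the model and adds overlap handling for batched inference. When overlapping tiles or batches are merged, the same object can be detected more than once, and a greedy NMS pass over the merged boxes removes the duplicates. Below is a minimal reference sketch of that dedup step; it is not the project's code, and fusing NMS into the exported model (as the commit describes) would replace this host-side pass.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS: visit boxes in descending score order and keep a box only
    if it overlaps every already-kept box by less than iou_thr.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

For example, two near-identical boxes from adjacent overlapping tiles collapse to the higher-scoring one, while a distant box survives untouched.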
ba3e3b4a55 | Alex Bezdieniezhnykh | 2025-02-06 10:48:03 +02:00
  move Python inference to the Azaion.Inference folder