Commit Graph

23 Commits

Author SHA1 Message Date
Alex Bezdieniezhnykh c0f8dd792d Fix console logging
Fix the same-files problem across different Python libs
Correct command logging in the command handler
2025-06-14 21:01:32 +03:00
Alex Bezdieniezhnykh 6f297c4ebf Write inference and loader logs to file 2025-06-14 16:08:32 +03:00
Alex Bezdieniezhnykh 8aa2f563a4 Consolidate CommonSecurity into Common.dll 2025-06-13 23:06:48 +03:00
Alex Bezdieniezhnykh 904bc688ca Fix inference bug in model loading 2025-06-11 07:23:14 +03:00
Alex Bezdieniezhnykh dcd0fabc1f Add loader and versioning 2025-06-10 08:53:57 +03:00
Alex Bezdieniezhnykh 7750025631 Separate loading functionality from the inference client into a loader client; the inference client now calls the loader client to get the model.
Remove dummy DLLs; remove the resource loader from C#.

TODO: load DLLs separately via the Loader UI and loader client

WIP
2025-06-06 20:04:03 +03:00
dzaitsev d92da6afa4 Send errors to the UI
Notify the client of AI model conversion
2025-05-14 12:43:50 +03:00
Alex Bezdieniezhnykh 28069f63f9 Reapply "import Tensorrt not in compile time in order to dynamically load tensorrt only if nvidia gpu is present"
This reverts commit cf01e5d952.
2025-04-30 23:47:46 +03:00
Alex Bezdieniezhnykh cf01e5d952 Revert "import Tensorrt not in compile time in order to dynamically load tensorrt only if nvidia gpu is present"
This reverts commit 1c4bdabfb5.
2025-04-30 23:32:03 +03:00
Alex Bezdieniezhnykh 1c4bdabfb5 import Tensorrt not in compile time in order to dynamically load tensorrt only if nvidia gpu is present 2025-04-30 23:08:53 +03:00
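The commit above describes importing TensorRT at run time rather than at compile/import time, so the application still starts on machines without an NVIDIA GPU. A minimal Python sketch of that pattern, assuming a fallback to another backend when the `tensorrt` package is absent (the fallback behavior is an assumption, not the project's actual code):

```python
import importlib
import importlib.util


def try_load_tensorrt():
    """Return the tensorrt module if it is installed, else None.

    Importing lazily, and only when the package actually exists, means
    the application can still start on machines without an NVIDIA GPU,
    where the tensorrt wheel is typically not installed.
    """
    if importlib.util.find_spec("tensorrt") is None:
        return None  # no TensorRT available; caller falls back to e.g. ONNX Runtime
    return importlib.import_module("tensorrt")
```

The key point is that `find_spec` only checks for the module's presence without importing it, so the probe is cheap and safe on GPU-less machines.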
Alex Bezdieniezhnykh a1ee077e0a Split into two files: tensorrt_engine and onnx engine 2025-04-30 19:33:59 +03:00
Alex Bezdieniezhnykh e9a44e368d Auto-convert the TensorRT engine from ONNX for the specific CUDA GPU 2025-04-24 16:30:21 +03:00
Alex Bezdieniezhnykh e798af470b Read the CDN YAML config from the API
Automate TensorRT model conversion when no engine exists for the user's GPU
2025-04-23 23:20:08 +03:00
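The conversion commits above imply one cached TensorRT engine per GPU model, since a built engine is specific to the device it was built on. A hypothetical sketch of deriving a per-GPU cache path (function and directory names are illustrative, not taken from the repo):

```python
import hashlib
from pathlib import Path


def engine_cache_path(onnx_path: str, gpu_name: str, cache_dir: str = "engines") -> Path:
    """Derive a cache path for a TensorRT engine converted from an ONNX model.

    TensorRT engines are built for a specific GPU, so the cache key combines
    the source model path with the GPU name; a different GPU yields a
    different path and therefore triggers a fresh conversion instead of
    loading a stale, incompatible engine.
    """
    key = hashlib.sha1(f"{onnx_path}:{gpu_name}".encode()).hexdigest()[:12]
    return Path(cache_dir) / f"{Path(onnx_path).stem}-{key}.engine"
```

With this scheme, "automate conversion when no engine exists for the user's GPU" reduces to an existence check on the returned path before building.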
Alex Bezdieniezhnykh b21f8e320f Fix bug with annotation-result gradient stops
Add TensorRT engine
2025-04-02 00:29:21 +03:00
Alex Bezdieniezhnykh 6429ad62c2 Refactor external clients
Make model batch size a config parameter
2025-03-24 00:33:41 +02:00
Alex Bezdieniezhnykh d93da15528 Fix the mode switcher in DatasetExplorer.xaml 2025-03-02 21:32:31 +02:00
Alex Bezdieniezhnykh 961d2499de Fix inference
Fix minor issues
2025-02-14 09:00:04 +02:00
Alex Bezdieniezhnykh e329e5bb67 Speed up startup 2025-02-12 13:49:01 +02:00
Alex Bezdieniezhnykh 43cae0d03c Make the Cython app exit correctly 2025-02-11 20:40:49 +02:00
Alex Bezdieniezhnykh 9973a16ada Print detection results 2025-02-10 18:02:44 +02:00
Alex Bezdieniezhnykh 0f13ba384e Complete the requirements.txt list
Fix build.cmd
2025-02-10 16:39:44 +02:00
Alex Bezdieniezhnykh c1b5b5fee2 Run NMS in the model itself; simplify and speed up postprocessing.
Run inference in batches, fix C# handling, add overlap handling
2025-02-10 14:55:00 +02:00
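The batching commit above splits inputs into fixed-size batches before inference. A minimal sketch of that idea (the function name is illustrative; the repo's actual batching code is not shown here):

```python
def batched(items, batch_size):
    """Yield consecutive slices of at most batch_size items.

    The final batch may be smaller than batch_size, so downstream
    inference code must not assume a fixed batch dimension.
    """
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

For example, five frames with `batch_size=2` produce batches of sizes 2, 2, and 1.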
Alex Bezdieniezhnykh ba3e3b4a55 Move Python inference to the Azaion.Inference folder 2025-02-06 10:48:03 +02:00