This page describes the server-side code that communicates with the mobile MYCam app. The server's main role is to run the recognition models used by the app, such as object recognition, hand segmentation, and object detection. The code is available at https://github.com/IAMLabUMD/MYCam-Server.
To run the MYCam server, you will need to meet the following requirements:
- Python 3.6
- Ubuntu 16.04
- TensorFlow 2.0
- CUDA 8.0
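Before launching the server, it can help to confirm the interpreter meets the Python requirement above. The helper below is illustrative and not part of the repository:

```python
import sys

# MYCam server targets Python 3.6 (see the requirements list above)
REQUIRED = (3, 6)

def check_python(required=REQUIRED):
    """Return True if the running interpreter meets the required version."""
    return sys.version_info[:2] >= required

if __name__ == "__main__":
    print("Python version OK:", check_python())
```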
To start the server, run `TOR_HTTP_Server_v3.py` with the following command:

```shell
python3 TOR_HTTP_Server_v3.py
```
Below are brief descriptions of the classes and functions. For more details, please read the comments in the files.
### TOR_HTTP_Server_v3.py
This is a simple HTTP server that calls functions in the classes that wrap an InceptionV3 model and the descriptors. You can start the server with the following snippet:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

httpd = HTTPServer(('127.0.0.1', 8000), SimpleHTTPRequestHandler)
httpd.serve_forever()
```
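As a sketch of how this serving loop behaves, the snippet below starts the same kind of server on a background thread and issues a request to it with the standard library. The port-0 binding and the background thread are demo conveniences, not how the real server runs (it blocks on port 8000):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to port 0 so the OS picks a free port (the real server uses 8000)
httpd = HTTPServer(('127.0.0.1', 0), SimpleHTTPRequestHandler)
port = httpd.server_address[1]

# serve_forever() blocks, so run it on a background thread for this demo
thread = threading.Thread(target=httpd.serve_forever, daemon=True)
thread.start()

# Issue a plain GET against the running server
with urllib.request.urlopen(f'http://127.0.0.1:{port}/') as resp:
    status = resp.status  # 200 when the request is served

httpd.shutdown()
print(status)
```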
### ObjectRecognizerV2.py

### DescriptorGenerator.py
### HandSegmentation.py
This file contains a class that finds the pixels belonging to a hand in an image. The hand segmentation can be used as follows:

```python
import cv2
import numpy as np

model = 'TOR_hand_all+TOR_feedback_fcn8_10k_16_1e-5_450x450'
threshold = 0.5
width = 450
height = 450
debug = True

# initialize the hand segmentation model
segmentation = Segmentation(model=model,
                            threshold=threshold,
                            image_width=width,
                            image_height=height,
                            debug=debug)

# load the input image and resize it to the model's input size
image = cv2.imread('/Users/jonggihong/Downloads/tmp_2.jpg', cv2.IMREAD_COLOR)
if width is not None and height is not None:
    new_shape = (int(width), int(height))
    image = cv2.resize(image, new_shape, interpolation=cv2.INTER_CUBIC)

# segment the hand from the input image
image, pred = segmentation.do(image)
hand_area = np.sum(pred)  # number of hand pixels
```
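The `pred` mask returned by `segmentation.do` can be reduced to a simple presence check by comparing the hand area against the total pixel count. This helper is a sketch, not part of the repository, and the 0.1 ratio threshold is an illustrative choice:

```python
import numpy as np

def hand_present(pred, min_ratio=0.1):
    """Return True if the segmented hand covers at least min_ratio of the mask.

    pred is assumed to be a binary (0/1) mask the size of the input image.
    """
    pred = np.asarray(pred)
    hand_area = np.sum(pred)   # number of hand pixels, as in the example above
    total = pred.size          # total number of pixels in the mask
    return bool((hand_area / total) >= min_ratio)

# Toy mask: hand covers the left half of a 4x4 image
mask = np.zeros((4, 4))
mask[:, :2] = 1
print(hand_present(mask))  # half the pixels are hand, so True
```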
### ObjectDetector.py
This is a YOLOv3 object detector. See the usage example below:

```python
# assuming the ObjectDetector class is defined in ObjectDetector.py
from ObjectDetector import ObjectDetector

od = ObjectDetector()
od.detect('/home/jhong12/TOR-app-files/photo/TempFiles/CA238C3A-BDE9-4A7F-8CCA-76956A9ABD83/tmp_2.jpg')
```
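Post-processing YOLOv3 detections typically relies on intersection-over-union (IoU) between bounding boxes to discard overlapping duplicates. The repository's `detect` output format is not documented here, so the helper below is a generic sketch rather than the project's API:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp to zero so disjoint boxes contribute no intersection area
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap is one third of the union
```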
The repository also includes the following files:

- CHI2017_retrain.py
- GORTest.py
- ObjectRecognizer.py
- ObjectRecognizerV2.py
- StudyHelper.py
- retrain.py
Under review
- Jonggi Hong: jhong12@umd.edu
- Hernisa Kacorri: hernisa@umd.edu