Manoosh Samiei

I am an AI Researcher at Mila, supervised by Aaron Courville. Previously, I worked for two years as a computer vision researcher and 3D reconstruction engineer at Algolux, a self-driving-car software startup, and at Magicplan (Sensopia Inc.), an augmented reality mobile app that maps indoor environments.

I did my master's at McGill in Electrical and Computer Engineering, where I was advised by James J. Clark. My master's research used deep learning and eye-tracking data to model visual attention and distraction in visual search tasks, a problem closely related to saliency prediction.

I did my undergraduate degree at Shahid Beheshti University in Electrical Engineering, with a focus on telecommunications and signal processing. My undergraduate research focused on an end-to-end training approach for the lane-following task (behavioral cloning) in autonomous vehicles using convolutional neural networks. During my undergraduate studies, I worked on multiple robotics projects and learned electronic circuit troubleshooting and assembly.

My research interests lie at the intersection of human vision, robotics, computer vision, and machine learning. Besides AI and programming, I also enjoy hardware assembly and electronic circuitry.

Email  /  LinkedIn  /  Google Scholar  /  Twitter  /  Github

profile photo
Highlights

15/09/2023 I am joining Mila soon as an AI researcher!

09/08/2022 Moving to Toronto, Ontario on September 1st!

01/08/2021 I have officially graduated from McGill with a Master of Science in Electrical Engineering!

16/08/2021 I am starting as a computer vision researcher at Algolux!

Projects
Master's Research, 2021
Predicting Visual Attention and Distraction During Visual Search Using Convolutional Neural Networks
Manoosh Samiei, James J. Clark
GitHub Code / Thesis
Our dataset analysis report is available on arXiv: Code / Paper
A publication in the Journal of Vision is in progress: arXiv pre-print

We present two approaches. Our first method uses a two-stream encoder-decoder network to predict the fixation density maps of human observers during visual search. Our second method predicts the segmentation of distractor and target objects during search using a Mask R-CNN segmentation network. We use the COCO-Search18 dataset to train/fine-tune and evaluate our models.
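To make the first approach concrete, the training target for such a network is a fixation density map: discrete fixation points smoothed with a Gaussian and normalized into a probability distribution over pixels. A minimal sketch (the map size and sigma here are illustrative, not the thesis settings):

```python
# Build a fixation density map from discrete (row, col) fixation points
# by summing a Gaussian at each fixation and normalizing to sum to 1.
import numpy as np

def fixation_density_map(fixations, height, width, sigma=4.0):
    """Sum a Gaussian bump at each fixation; normalize into a density."""
    rows = np.arange(height)[:, None]
    cols = np.arange(width)[None, :]
    density = np.zeros((height, width))
    for r, c in fixations:
        density += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return density / density.sum()

fdm = fixation_density_map([(10, 12), (24, 30)], height=32, width=48)
print(fdm.shape, round(float(fdm.sum()), 6))  # (32, 48) 1.0
```

The encoder-decoder network is then trained to regress (or match, via a probabilistic loss) maps of this form from the search image and the target cue.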

Implementing the DeepGaze II Free-viewing Saliency Model, 2020
GitHub / Report

DeepGaze II extracts high-level image features using a VGG19 convolutional neural network pretrained for object recognition. The model is trained in a log-likelihood learning framework and aims to predict where humans look while free-viewing a set of images.
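The core of that log-likelihood framework: the model's output map is normalized into a probability distribution over pixels, and training maximizes the average log-probability of recorded human fixations under that distribution. A minimal numerical sketch (map size and fixation coordinates are illustrative):

```python
# Sketch of the log-likelihood objective behind DeepGaze II: turn a map
# of unnormalized scores into a distribution over pixels, then score the
# model by the mean log-probability of observed human fixations.
import numpy as np

def softmax2d(logits):
    """Normalize a 2D score map into a probability distribution."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fixation_log_likelihood(logits, fixations):
    """Mean log p(fixation) under the predicted distribution.

    fixations: list of (row, col) pixel coordinates of human fixations.
    """
    p = softmax2d(logits)
    return float(np.mean([np.log(p[r, c]) for r, c in fixations]))

rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 32))
ll = fixation_log_likelihood(logits, [(5, 7), (20, 11)])
# Baseline: a uniform map over 32*32 pixels scores log(1/1024) ≈ -6.93;
# a model that concentrates probability near real fixations scores higher.
print(ll)
```

In practice the network's parameters are optimized by gradient ascent on this quantity over many images and observers.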

Object Detection with Deep Reinforcement Learning, 2020
GitHub / Report / video

We implemented two papers that formulate object localization as a dynamic Markov decision process solved with deep reinforcement learning. We compare two different action settings for this MDP: a hierarchical method and a dynamic method.
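To illustrate the MDP framing: the state is the current bounding box, each action translates or rescales it, and the reward reflects whether the box's IoU with the ground truth improved. A toy sketch of the dynamic action setting (box coordinates, step size, and action set are illustrative; the hierarchical setting instead descends into one of several fixed sub-windows per step):

```python
# Toy sketch of object localization as an MDP: state = current box,
# actions = discrete box transformations, reward = sign of the IoU change
# with the ground-truth box.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

STEP = 8
ACTIONS = {  # (dx1, dy1, dx2, dy2) applied to the box corners
    "right": (STEP, 0, STEP, 0), "left": (-STEP, 0, -STEP, 0),
    "down": (0, STEP, 0, STEP), "up": (0, -STEP, 0, -STEP),
    "grow": (-STEP, -STEP, STEP, STEP), "shrink": (STEP, STEP, -STEP, -STEP),
}

def step(box, action, target):
    """Apply one action; reward +1 if IoU improved, else -1."""
    new = tuple(c + dc for c, dc in zip(box, ACTIONS[action]))
    reward = 1 if iou(new, target) > iou(box, target) else -1
    return new, reward

target = (40, 40, 104, 104)
box = (0, 0, 64, 64)
_, r_right = step(box, "right", target)  # moves toward the target
_, r_left = step(box, "left", target)    # moves away from it
print(r_right, r_left)  # 1 -1
```

In the papers, a deep Q-network chooses the action from CNN features of the current window rather than from the IoU, which is only available at training time.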

NeurIPS 2019 Reproducibility Challenge
Reproducing CNN2: Viewpoint Generalization via a Binocular Vision, 2019
Report

We replicated the results of the paper “CNN2: Viewpoint Generalization via a Binocular Vision” on two datasets, smallNORB and ModelNet2D.

Implementation of End-to-End Behavioral Cloning Approach for Lane Following Task in Autonomous Vehicles using Convolutional Neural Networks, 2019
Thesis in Persian

Services

Volunteered at the poster sessions of the Montreal AI Symposium 2020 and WiML 2020, helping attendees locate posters and resolving technical issues on the Gather Town platform.

Source code and style from Jon Barron's website