Selected Research Projects

IBM Watson Recruitment - Talent Management

Team Size: 7
Team Member(s): Joydeep Mondal, Sudhanshu Shekhar Singh, Ritwik Chaudhuri, Manish Kataria, Kushal Mukherjee, Gyana Parija
Technologies/Concepts: PySpark, Kafka, Cloudant, ObjectStore, Theano, LSTM, SOAR, AnyLogic

IBM Watson™ Recruitment is a cognitive talent management solution that increases recruiter efficiency, allowing HR to improve and accelerate people’s impact on the business. It automatically predicts, without bias, the best-suited candidates who are most likely to succeed in an organization. In my current role I work on both the research and development aspects of its cognitive component:

  • Machine Learning Pipeline Design - Designing the learning and real-time, multi-client classification pipeline architecture for predictive analytics on Kafka and PySpark.
  • Dynamic Taxonomy Generation - Implementing an LSTM-based architecture for generating taxonomies from a dataset of skill names. (Upcoming Paper Submission)
  • Job Similarity Computation - Exploiting the semantic and syntactic structure of job description documents to compute job similarity (a minimal sketch follows this list).
  • Collaborative Cognition - Developing a knowledge exchange framework for the different stakeholders in the recruitment process. Modelled as independent cognitive agents, each stakeholder influences and is influenced by the other agents, eventually leading to a more broadly acceptable list of candidates. (Upcoming Paper Submission)
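
A minimal sketch of the job-similarity computation above, assuming scikit-learn's TF-IDF and cosine similarity as illustrative stand-ins (the production pipeline runs on Kafka/PySpark, and the job descriptions below are made up):

    # Score pairwise similarity between job descriptions via TF-IDF vectors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    job_descriptions = [
        "Develops Spark-based data pipelines and Kafka consumers for analytics.",
        "Builds streaming ETL jobs on PySpark and maintains Kafka topics.",
        "Designs recruitment campaigns and screens candidate resumes.",
    ]

    # Bag-of-words TF-IDF captures the shallow lexical/syntactic overlap;
    # richer semantic features (embeddings, parsed skill mentions) would be
    # plugged into the same pairwise-similarity step.
    vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    X = vectorizer.fit_transform(job_descriptions)

    similarity = cosine_similarity(X)   # (n_jobs, n_jobs) matrix
    print(similarity.round(2))          # the first two jobs score highest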

July'16-Present

Cogniculture - Human-Machine Interaction

Team Size: 10
Team Member(s): Rakesh Pimplikar, Kushal Mukherjee, Gyana Parija, Ramasuri Narayanam, Rohith Vallam, Harith Vishvakarma, Ritwik Chaudhuri, Joydeep Mondal, Manish Kataria

Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high-performance computing are making possible things we could only have imagined earlier. The AI community holds a diverse set of opinions on the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose the novel idea of cognitive agents, both human and machine, living together in a complex adaptive ecosystem, collaborating on human computation to produce essential social goods while promoting the sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal, and propose a governance mechanism around this ecosystem to ensure ethical behavior by all cognitive agents. Along with a novel set of use cases for Cogniculture, we discuss the road map ahead for this journey.

Jan'17-Present

VisualHashtags - Visual Summarization of Social Media Events

Guide: Dr. Ponnurangam Kumaraguru, Dr. AV Subramanyam
Team Size: 2
Team Member(s): Sonal Goel
Technologies/Concepts: Matlab, Discriminative Learning, Object Recognition, Filtering Social Media Datasets

In this paper we propose a methodology for visual event summarization by extracting mid-level visual elements from images associated with social media events on Twitter (#VisualHashtags). The key research question is: which elements can visually capture the essence of a viral event, and hence explain its virality and summarize it? Compared to existing approaches to visual event summarization on social media data, we aim to discover #VisualHashtags, i.e., meaningful image patches that can become the visual analog of a regular text hashtag on Twitter. Our algorithm incorporates a multi-stage filtering process and social-popularity-based ranking to discover mid-level visual elements, and overcomes the challenges faced by direct application of existing methods.
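
The ranking step can be illustrated with a small, purely hypothetical sketch: candidate patch clusters are scored by combining a discriminativeness term with the social popularity of the tweets whose images contain them. The field names and the exact combination rule here are assumptions, not the paper's precise formulation:

    # Rank candidate mid-level patch clusters by discriminativeness weighted
    # by social popularity (e.g., average retweet count of source tweets).
    def rank_visual_hashtags(clusters):
        """clusters: list of dicts with hypothetical keys
        'discriminativeness' (classifier margin on held-out images) and
        'retweets' (retweet counts of the tweets the patches came from)."""
        scored = []
        for c in clusters:
            popularity = sum(c["retweets"]) / max(len(c["retweets"]), 1)
            score = c["discriminativeness"] * (1.0 + popularity)
            scored.append((score, c))
        return [c for _, c in sorted(scored, key=lambda t: t[0], reverse=True)]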

May'16-July'16

Multi-Sensor Data Fusion for Human Activity Recognition

First Prize, Technical Paper Presentation, Cogenesis 2016, Delhi Technological University
Guide: Dr. Richa Singh
Team Size: 2
Team Member(s): Anchita Goel
Course: Machine Learning
Technologies/Concepts: Matlab, OpenCV, Optical Flow, Signal Processing, Data Fusion

Human activity recognition is a well-known area of research in pervasive computing that involves detecting the activity of an individual using various types of sensors. It finds great utility in human-centric problems, not only for tracking one’s own daily activities but also for monitoring the activities of others, such as the elderly or patrol officers, for healthcare and security purposes. With the growth of interest in AI, such a system can provide useful information that makes an agent more intelligent and aware of its user, giving a more personalized experience. Several technologies have been used to estimate a person’s activity: the sensors found in smartphones (accelerometer, gyroscope, magnetometer, etc.); egocentric cameras; other wearable sensors worn on different parts of the body such as the chest, wrist and ankles that measure vital signs like heart rate, respiration rate and skin temperature (in addition to the same inertial data provided by smartphones); and environmental sensors that measure humidity, audio level, temperature, etc. However, to the best of our knowledge, no prior work has put a fusion of these sensors and egocentric cameras to use. In this paper we explore the suggested fusion of sensors and share the results obtained. Our fusion approach shows significant improvement over using either of the chosen sensors independently.
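
A minimal sketch of the feature-level fusion idea, assuming windowed accelerometer signals and per-window optical-flow histograms have already been extracted (the study itself used Matlab/OpenCV; the classifier and data below are placeholders):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def accel_features(window):
        """window: (n_samples, 3) array of x/y/z accelerometer readings."""
        return np.concatenate([window.mean(axis=0),
                               window.std(axis=0),
                               np.abs(np.diff(window, axis=0)).mean(axis=0)])

    def fuse(accel_window, flow_histogram):
        # Early fusion: concatenate modality-specific features into one vector.
        return np.concatenate([accel_features(accel_window), flow_histogram])

    # Placeholder data: 100 windows, 8-bin flow histograms, 4 activity classes.
    rng = np.random.default_rng(0)
    X = np.stack([fuse(rng.normal(size=(50, 3)), rng.random(8)) for _ in range(100)])
    y = rng.integers(0, 4, size=100)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)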

Aug'15-Dec'15

SLAM in Egocentric Videos

Guide: Dr. Saket Anand, Dr. Chetan Arora
Team Size: 1
Technologies/Concepts: Matlab, ROS, C++, Visual Studio, Lie Algebra, Pose Estimation, Visual Odometry, Structure From Motion, Bundle Adjustment

Body-mounted and vehicle cameras are becoming increasingly popular, with the Internet overflowing with content from car dashboards, video bloggers, and even law enforcement officers. Analyzing these videos and extracting more information about the anonymous entity has become a growing topic of study among vision groups across the world. A challenging task in this area is localizing the anonymous entity in its surroundings without the use of global systems such as GPS, which may prove infeasible, unreliable or erratic in many situations. In this report we present a comprehensive study of Simultaneous Localization and Mapping (SLAM) algorithms, evaluating their application to detecting egomotion in egocentric videos and finally leading to the development of a Visual Positioning System.
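
As an illustration of the egomotion-estimation building block, here is a two-frame relative-pose sketch using OpenCV's Python bindings (the project itself used Matlab/ROS/C++; the function below is only a simplified stand-in for a full SLAM front end):

    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Estimate rotation R and unit translation t between two consecutive
        grayscale frames. K is the 3x3 camera intrinsics matrix."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC on the essential matrix rejects outlier matches, which is
        # important given the heavy shake typical of egocentric video.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # t is recovered only up to scale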

Jan'15-May'16

Distress Detection

Best Demo, Research Showcase'14, IIIT-Delhi
Guide: Dr. Sanjit Kaul
Team Size: 2
Team Member(s): Anil Sharma, PhD Candidate IIIT-Delhi
Technologies/Concepts: Machine Learning, Android, PHP, Matlab, Signal Processing

We investigate an unobtrusive and 24×7 human distress detection and signaling system, Always Alert, that requires the smartphone, and not its human owner, to be on alert. The system leverages the microphone sensor, at least one of which is available on every phone, and assumes the availability of a data network. We propose a novel two-stage supervised learning framework, using support vector machines (SVMs), that executes on a user’s smartphone and monitors natural vocal expressions of fear — screaming and crying in our study — when a human being is in harm’s way.
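
A minimal sketch of the two-stage cascade idea with scikit-learn SVMs, assuming per-frame audio features (e.g., MFCC vectors) have already been computed; the class below is a simplified stand-in for the on-phone implementation:

    import numpy as np
    from sklearn.svm import SVC

    class TwoStageDistressDetector:
        """Stage 1 separates vocal from non-vocal frames; stage 2 separates
        distress (scream/cry) from other vocal sounds. All inputs are NumPy
        arrays: X is (n_frames, n_features), labels are 0/1 vectors."""

        def __init__(self):
            self.vocal_clf = SVC(kernel="rbf", gamma="scale")
            self.distress_clf = SVC(kernel="rbf", gamma="scale")

        def fit(self, X, y_vocal, y_distress):
            self.vocal_clf.fit(X, y_vocal)
            # Stage 2 is trained only on frames that contain a voice.
            self.distress_clf.fit(X[y_vocal == 1], y_distress[y_vocal == 1])
            return self

        def predict(self, X):
            is_vocal = self.vocal_clf.predict(X)
            out = np.zeros_like(is_vocal)
            if is_vocal.any():
                out[is_vocal == 1] = self.distress_clf.predict(X[is_vocal == 1])
            return out   # 1 = distress, 0 = otherwise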

Aug'14-May'15

Multi-Agent Path Planning in Warehouse Butlers - Artificial Intelligence

Guide: Dr. Sandip Aine
Team Size: 2
Team Member(s): Anchita Goel
Course: Artificial Intelligence
Technologies/Concepts: Java AWT, Java Graphics, Multi-Agent A*

The boom in the e-commerce industry has led to a large number of warehouses cropping up all over the globe. Automating the delivery processes in these warehouses is a growing requirement for reducing cost in terms of manpower and increasing efficiency in terms of time taken. Multi-agent path planning is a crucial aspect of this challenge. Naive approaches do not work here: a complete A* over all combinations of butlers and targets fails due to the huge state space, and so does Local Repair A*, where each butler selfishly moves toward its target and replans only on collision, in a blindfolded manner. In this project we implemented the MAPP algorithm and prepared a simulation bench that can run any placement of walls, butlers and items, built by adapting an open-source version of Pac-Man. We evaluate multiple simulations of butlers and warehouse architectures, and detail our observations on the same.
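
The single-agent search underneath the planner is plain grid A*; the sketch below shows it on a 4-connected grid with a Manhattan-distance heuristic (the MAPP layer then orders the butlers and resolves conflicts on top of searches like this):

    import heapq

    def astar(grid, start, goal):
        """grid: 2D list, 0 = free, 1 = wall; start/goal: (row, col) tuples."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0, start, [start])]
        seen = set()
        while open_set:
            f, g, node, path = heapq.heappop(open_set)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = node[0] + dr, node[1] + dc
                if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                    heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1,
                                              (r, c), path + [(r, c)]))
        return None   # no path exists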

Aug'15-Dec'15

Identifying Prolonged Narcotics Users from Face Images

Guide: Dr. Richa Singh
Team Size: 2
Team Member(s): Prateekshit Pandey
Course: Pattern Recognition
Technologies/Concepts: Matlab, Machine Learning, Pattern Recognition

The 2014 World Drug Report by the United Nations Office on Drugs and Crime suggests that the worldwide increase in crime rates for possession for personal use during 2003-2012 was due to an increase in the total number of drug users, especially of cannabis and ATS (amphetamine-type stimulants). In addition, with recent improvements in CCTV surveillance and the introduction of wearable video cameras for police officers in the United States and some other countries, a large amount of data is available for biometric analysis. We propose a system that uses face images from such sources to identify faces possibly altered by prolonged narcotic drug usage. Experiments were conducted mainly on before-and-after drug mug-shot images made public by the Multnomah County Sheriff's Office. We use three different types of feature extraction techniques: HoG, Local Binary Patterns and Color Histogram, over which we apply a Support Vector Machine with different kernels to classify the face images.
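
A minimal sketch of one of the three feature pipelines (HoG features fed to an SVM), using scikit-image and scikit-learn as stand-ins for the original Matlab code; parameters and data handling are illustrative assumptions:

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import SVC

    def hog_features(face_img, size=(128, 128)):
        """face_img: 2D grayscale array of a cropped, aligned face."""
        face = resize(face_img, size, anti_aliasing=True)
        return hog(face, orientations=9, pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2), block_norm="L2-Hys")

    def train(face_imgs, y):
        """y[i] = 1 for mug shots labelled as prolonged drug use, else 0."""
        X = np.stack([hog_features(img) for img in face_imgs])
        return SVC(kernel="rbf", gamma="scale", class_weight="balanced").fit(X, y)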

Jan'15-May'15