Projects

Current Research

Agent Corrections to Pac-Man from the Crowd
Reinforcement learning suffers from poor initial performance. Our approach uses crowdsourcing to provide non-expert suggestions that speed up an RL agent's learning. We currently use Ms. Pac-Man as our application domain because of its popularity as a game. Our studies have shown that crowd workers, although non-experts, are good at identifying an agent's mistakes. We are now working on how to integrate the crowd's advice to speed up the RL agent's learning. In the future, we intend to apply this approach on a physical robot.
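As an illustrative sketch only (not the project's actual implementation), crowd-flagged mistakes could be folded into a tabular Q-learning update as a shaping penalty. The `crowd_flags` set, the `penalty` value, and the function name are assumptions for illustration:

```python
def q_update(Q, s, a, r, s_next, actions,
             alpha=0.1, gamma=0.95, crowd_flags=None, penalty=1.0):
    """One tabular Q-learning step. crowd_flags is a hypothetical set of
    (state, action) pairs that non-expert crowd workers marked as mistakes;
    flagged pairs receive an extra shaping penalty on the reward."""
    shaped = r - penalty if crowd_flags and (s, a) in crowd_flags else r
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (shaped + gamma * best_next - old)
    return Q[(s, a)]
```

The shaping term discourages actions the crowd identified as errors without requiring the workers to demonstrate correct play themselves.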

Lifelong Learning for Heterogeneous Robot Teams
This is a joint project of WSU, the University of Pennsylvania, and Olin College. The project develops transfer learning methods that enable teams of heterogeneous agents to rapidly adapt control and coordination policies to new scenarios. Our approach combines lifelong transfer learning with autonomous instruction to support continual transfer among heterogeneous agents and across diverse tasks. The resulting multi-agent system will accumulate transferable knowledge over consecutive tasks, enabling the transfer learning process to improve over time and the system to become increasingly versatile. We will apply these methods to sequential decision making (SDM) tasks in dynamic environments with aerial and ground robots.
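As a rough sketch of the shared-knowledge idea (the factorization and all names here are our illustrative assumptions, in the spirit of lifelong-learning methods such as PG-ELLA, not the project's code): each task's policy parameters can be factored as a shared basis times a small task-specific code, so adapting to a new task means fitting only the code while the basis accumulates reusable knowledge:

```python
import numpy as np

# Shared basis L: knowledge accumulated across agents and tasks; each task
# keeps only a small coefficient vector s, so its policy is theta = L @ s.
rng = np.random.default_rng(0)
d, k = 6, 2                        # policy-parameter dim, latent components
L = rng.normal(size=(d, k))        # hypothetical shared knowledge base

def adapt_to_new_task(L, theta_target):
    """Reuse the shared basis: fit only the task code s by least squares."""
    s, *_ = np.linalg.lstsq(L, theta_target, rcond=None)
    return s

theta_new = L @ np.array([0.8, -0.3])    # a new task's 'true' parameters
s_new = adapt_to_new_task(L, theta_new)  # recovers a code of size k, not d
```

Because only the k-dimensional code is fit per task, consecutive tasks become cheaper to learn as the shared basis improves.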

Deep Learning in Physical Robots
This project targets the application of the state-of-the-art Deep Q-Network (DQN) algorithm to robotic platforms. Can we leverage this algorithm, using only on-board or external cameras, to extract relevant visual features and learn to control robots without human-engineered state features? Two known challenges with DQN are 1) its requirement for large computing resources and 2) its long learning times. This project focuses on identifying speed-up techniques for the successful use of deep learning and reinforcement learning in robotic systems.

  • Yunshu Du, Gabriel V. de la Cruz Jr., James Irwin, and Matthew E. Taylor. Initial Progress in Transfer for Deep Reinforcement Learning Algorithms. In Proceedings of the Deep Reinforcement Learning: Frontiers and Challenges (DeepRL) workshop at the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016). (✤ These authors contributed equally.)
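One lever behind the speed-up question above is how experience is stored and reused; a minimal sketch of DQN's uniform experience replay buffer (our illustration, not the project's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay, a core DQN component: it decorrelates
    training samples and lets each costly robot interaction be reused
    many times instead of once."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritized variants are one known speed-up.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

On physical robots, where environment interaction is far slower than computation, the reuse that replay provides matters even more than in simulation.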

Past Projects

Identifying Allele Patterns in Anaplasma marginale Protein Sequences
Developed a computer algorithm that accepts genome sequences as input and identifies their pattern composition. This can be used to inform and expedite further studies of molecular epidemiology.

Rapid Algorithm for Detecting Antibiotic Resistance Gene Sequences from Next-Gen Sequencing Data
Developed a computer algorithm to rapidly sift through large sets of data to find all potential resistance gene sequences. We used a multiplex sequencing strategy with an Illumina MiSeq to generate genome sequences for 50 isolates of E. coli and Salmonella.

To analyze these data, we developed a computer algorithm based on an align-assembly approach. Sequence data were first prescreened against reference genome sequences (GenBank NC_010473.1 and NC_003197.1) using Sequence Alignment/Map (SAM) Tools and PySAM. Unmapped sequences were assumed to include antibiotic resistance genes, and these sequences were aligned against a local antibiotic resistance database using the Basic Local Alignment Search Tool (BLAST). The results of the alignment process were manipulated using BioPython to retrieve high-scoring sequences; these were assembled against corresponding antibiotic resistance gene sequences using the CLC Genomics Workbench.

Data from this analysis can be used to compare resistance genes for the sequenced isolates and to develop a catalog of single-nucleotide polymorphisms (SNPs) that can then be used to develop high-throughput assays to study the molecular epidemiology of antibiotic resistance.
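The prescreening step hinges on the SAM format's "unmapped" flag (bit 0x4 in the SAM specification). A simplified pure-Python sketch of that filter, with records as plain tuples for illustration (the real pipeline operated on actual alignment files via SAMtools and PySAM):

```python
UNMAPPED = 0x4  # SAM spec FLAG bit: the read did not align to the reference

def unmapped_reads(sam_records):
    """Keep reads that failed to map to the reference genome; these become
    the candidate antibiotic-resistance sequences passed on to BLAST.
    Each record is a hypothetical (name, flag, sequence) tuple."""
    return [(name, seq) for name, flag, seq in sam_records
            if flag & UNMAPPED]
```

With PySAM the same test is a per-read attribute check (`read.is_unmapped`) over an `AlignmentFile`, but the flag logic is identical.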