I am a Technology Strategic Planner at Huawei. At present, I am engaged in AI product and technology planning, as well as exploring and researching new directions for consumer electronics products.
I finished my PhD at Tsinghua University, advised by Prof. Yu Zhu and Prof. Chuxiong Hu.
My interests lie at the intersection of AGI and robotics. I was a co-founder of KEYI Technology, which created the world's first commercial modular robotic kit for entertainment and education.
Yunan Wang, Jiayu Wang, Yixiao Li, Chuxiong Hu, Yu Zhu
International Conference on Advanced Robotics and Mechatronics. 2022 (Best Conference Paper Finalist)
Paper
For multi-step robotic manipulation, it is important but challenging to predict the future state of the object conditioned on the applied action, especially from the original sensory observation such as images. This paper proposes a latent object-centric representation (LOR) that can encode implicit visual features from raw RGB images into a compact and generalizable representation of the object states suitable for future-state prediction. Real-world experiments on pushing tasks demonstrate that the proposed method can achieve a high success rate on pushing previously unseen objects with diverse shapes and scales, outperforming state-of-the-art model-based and end-to-end methods including baselines that use ground-truth object poses. The proposed approach is an important step toward fully autonomous and generalizable visual-based robotic manipulation.
Jiayu Wang, Chuxiong Hu, Yunan Wang, Yu Zhu
IEEE Access. 2021
Paper
Understanding the physical interactions of objects with environments is critical for multi-object robotic manipulation tasks. A predictive dynamics model can predict the future states of manipulated objects, which is used to plan plausible actions that enable the objects to achieve desired goal states. In this paper, we propose a Deep Object-centric Interaction Network (DOIN), which encodes object-centric representations for multiple objects from raw RGB images and reasons about the future trajectory for each object in latent space. The proposed method is evaluated both in simulation and real-world experiments on multi-object pushing tasks. Real-world experiments demonstrate that the model trained on simulated data can be transferred to the real robot and can successfully perform multi-object pushing tasks for previously-unseen objects with significant variations in shape and size.
Jiayu Wang, Shize Lin, Chuxiong Hu, Yu Zhu, and Limin Zhu
IEEE Robotics and Automation Letters (RA-L). 2020
Webpage •
Paper •
Code
We consider a scenario where a robot is capable of autonomously opening previously unseen doors. In this project, we propose a novel method for opening unseen doors with no prior knowledge of the door model, which leverages semantic 3D keypoints as door handle representations to generate the end-effector trajectory from a motion planner. The keypoint representations are predicted from raw visual input by a deep neural network, which provides a concise and semantic description of the handle to determine the grasp pose and subsequent motion planning. Qualitative results show that our proposed method outperforms the state-of-the-art pose-based methods on real test sets in terms of perception metrics. Hardware experiments demonstrate that our proposed method achieves a 94.2% success rate on opening 6 previously unseen doors with significant shape variations under different environments and conditions.
Jiayu Wang, Chuxiong Hu, and Yu Zhu
IEEE Robotics and Automation Letters (RA-L) with IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS). 2021
Webpage •
Paper •
Code
Legged robots have the potential to exhibit an unmatched ability to perform versatile and robust locomotion. We propose a novel two-level hierarchical learning framework for quadruped locomotion control with a limited amount of prior knowledge. Our approach combines a low-level central pattern generator (CPG)-based controller with a high-level neural network to learn a variety of locomotion tasks using deep reinforcement learning. The low-level CPG controller is preoptimized to generate stable rhythmic walking gaits, while the high-level network is trained to modulate the CPG parameters for achieving the task goal based on high-dimensional inputs, including the states of the robot and user commands. The proposed approach is deployed on a simulated modular quadruped. We empirically demonstrate that the policies learned with our approach allow the robot to perform multiple locomotion skills. The results show that our framework outperforms prior model-based and model-free methods in terms of robustness as well as sample efficiency.
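The two-level idea above can be illustrated with a minimal sketch: phase oscillators generate a rhythmic base gait, and a high-level policy (here a hand-written stand-in for the trained network) modulates the oscillator parameters from the robot state and a user command. The oscillator model, parameter names, and the toy policy below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class CPG:
    """Low level: one phase oscillator per leg, producing rhythmic joint targets."""
    def __init__(self, n_legs=4, freq=1.5, amp=0.3):
        # Evenly spaced phase offsets give a simple trot-like coordination.
        self.phase = np.linspace(0.0, 2.0 * np.pi, n_legs, endpoint=False)
        self.freq, self.amp = freq, amp

    def step(self, dt, d_freq=0.0, d_amp=0.0):
        # High-level modulation enters as offsets to the base CPG parameters.
        self.phase += 2.0 * np.pi * (self.freq + d_freq) * dt
        return (self.amp + d_amp) * np.sin(self.phase)  # per-leg joint targets

def toy_policy(robot_state, command):
    """High level: map observations and user commands to CPG parameter offsets.
    A trained neural network would replace this hand-written mapping."""
    d_freq = 0.5 * command["speed"]                    # speed up gait on command
    d_amp = 0.05 * np.tanh(robot_state["pitch"])       # adjust stride to posture
    return d_freq, d_amp

cpg = CPG()
state, cmd = {"pitch": 0.1}, {"speed": 1.0}
for _ in range(100):                                   # 1 s of control at 100 Hz
    d_freq, d_amp = toy_policy(state, cmd)
    targets = cpg.step(dt=0.01, d_freq=d_freq, d_amp=d_amp)
```

Because the policy only nudges oscillator parameters rather than emitting raw joint torques, the gait stays rhythmic and stable even with an untrained or noisy high-level controller, which is the sample-efficiency argument for this hierarchy.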
Jiayu Wang, Chuxiong Hu, Yu Zhu
IEEE International Conference on Mechatronics. 2021
Webpage •
Paper •
Code
Modular robots have the ability to perform versatile locomotion with a high diversity of morphologies. However, designing efficient and robust locomotion gaits for arbitrary robot morphologies remains exceptionally challenging. In this paper, a two-level hierarchical locomotion framework is proposed for addressing modular robot locomotion tasks. The framework combines a central pattern generator (CPG) controller with a neural network trained using deep reinforcement learning. A high-level learned controller modulates the low-level CPG parameters based on online inputs, including robot states and user commands. The results show that the proposed method achieves better overall performance than the baseline methods on different locomotion skills, including straight walking, velocity tracking, and circular turning. The simulation results confirm the effectiveness and robustness of the method.
CellRobot EDU, the first commercial educational modular robot in the world, inspires creativity and innovation for K-12 students and encourages all kids to build their own robots through hands-on exploration. An individual module can not only implement different functions independently, but also combine with others to handle more complex tasks in unique configurations. Connection through a unified port makes module installation simple and fast. Just as the cells in our bodies work together to create an organ, CellRobot is made up of single Cells that combine to build different robots. It provides hundreds of configurations and unlimited possibilities. In addition, visual script programming allows users to simply create new designs and efficiently set up movements without coding.
Jianbo Yang, Jiayu Wang, Yuexuan Ma, Chen Zang and Wenbai Chen
★ Awarded iF DESIGN AWARD 2018 ★
★ Awarded China College Students' "Internet+" Innovation and Entrepreneurship Competition, Gold Award (2016) ★
Featured on Forbes!
Webpage •
Kickstarter
CellRobot I, a modular robot prototype, was developed by a group of undergraduate students at Beihang University. One source of inspiration originates in biological systems, where lower-level cells can grow, reproduce, and construct various higher-level tissues. CellRobots can likewise change configurations and be reassembled with one another to form a range of new structures, allowing them to adapt to a variety of tasks and environments. Each "cell" has six attachable docking ports and a single degree of freedom (DoF) that lets it spin by itself. By attaching cells together, the assembly gains DoFs and can perform various actions as required. This version also had a detachable camera that can be installed to further expand its capabilities.
This project later evolved into a startup company, KEYi Technology Inc.
Jianbo Yang, Jiayu Wang, Yuexuan Ma, Chen Zang and Wenbai Chen
★ Awarded "Challenge Cup" National College Student Extracurricular Academic Science and Technology Works Competition, 1st Prize (2014) ★
Webpage
We developed a novel metamorphic transformable robot, BallBot, which is able to change its own shape by opening or closing its petals in order to adapt to different complex circumstances. For example, it can crawl over uneven terrain like a hexapod, yet it can transform itself into a sphere and roll forward when it encounters flat ground or a slope. A control algorithm was proposed that enables control of the rolling direction and speed by opening the petals in sequence. In addition, the upper half of the sphere contains carrying platforms, allowing the robot to serve as a transfer station that carries several lightweight drones to be dispatched during outdoor operations.
Jiayu Wang, Huayong Zhao, Junhan Zhou and Yange Zeng
★ Awarded Beihang Fengru Cup Technology Competition,
1st Prize (2013) ★
Webpage
Ellie is a robotic elephant trunk actuated by SMA (shape memory alloy, also known as muscle wire). Without any electromechanical moving parts, relying on the SMA itself, Ellie is capable of bending around like a real elephant trunk. The tip of Ellie also has a gripper that can grasp objects, just as an elephant's trunk does. The heating rate of the SMA is controlled to produce varying stiffness in the SMA spring, resulting in different bending motions of the robotic trunk.
Xiaoyu Cui, Jiayu Wang, Shaoping Wang
★ Awarded Beihang Fengru Cup Technology Competition,
2nd Prize (2013) ★
My long-term goal is to fulfill the vision of AGI robots that can interact freely with humans in their day-to-day lives. I am always curious about cutting-edge technology. Currently, I am particularly interested in research at the frontier of AI and robotics, such as AIGC applications in daily life and robot skill learning.
Furthermore, as an entrepreneur, I am interested in commercializing advanced technology and rapidly growing start-up companies. I enjoy communicating and collaborating with friends from other fields and disciplines.
I like reading and tennis. If you think we might have an interesting discussion on any aspect of life, feel free to contact me.