The Figure humanoid robot, powered by the Helix vision-language-action (VLA) model, has achieved a remarkable milestone: it can now autonomously load dirty clothes into washing machines and fold laundry.
Key Takeaways:
• Figure AI has developed a humanoid robot capable of carrying out household chores with impressive precision
• The robot, powered by the Helix model, can autonomously perform laundry chores
• Helix-powered robots have also demonstrated collaborative tasks, including two robots storing groceries together
This breakthrough, announced by Figure AI, marks a significant leap in robotics, demonstrating that general-purpose AI can tackle one of the most notoriously difficult household tasks for machines.
Laundry might seem mundane to humans, but for robots it represents a profound challenge in dexterous manipulation.
Soft fabrics like towels are deformable objects that lack fixed geometry. They change shape unpredictably, wrinkle, tangle, and respond differently to every touch. Traditional robotic systems, which excel at rigid objects in controlled factory environments, often fail here because they rely on precise models of object physics that simply don’t apply to crumpled clothing.
Yet Figure’s Figure 02 humanoid (and subsequent models), equipped with multi-fingered hands and powered by the end-to-end neural network called Helix, performs these tasks fully autonomously. No scripted code, no hand-engineered rules: just vision, language understanding, and fluid motor control flowing through a single unified model. This isn’t teleoperation or partial autonomy; the robot sees a pile of towels, processes a natural language command like “wash these clothes using the washing machine” or “fold the laundry,” and gets to work, picking items, adjusting strategies, recovering from errors, and stacking neat folds.
Figure’s humanoid robot, powered by Helix, loading dirty clothes into a washing machine
What Is Helix? The Brain Behind the Breakthrough
Helix is Figure AI’s generalist Vision-Language-Action (VLA) model, introduced earlier in 2025. It unifies three critical capabilities that have long been siloed in robotics: perception (seeing the world through cameras), language comprehension (understanding instructions and context), and action (precise control of the robot’s body).
Unlike earlier VLA systems that required task-specific fine-tuning or separate modules, Helix uses a single set of neural network weights for diverse behaviors. It features a clever “System 1, System 2” architecture:
– System 2 (S2): A larger 7-billion-parameter vision-language model (VLM), pretrained on internet-scale data, runs at 7-9 Hz. It handles high-level reasoning—understanding the scene, interpreting commands, and producing a semantic latent vector that encodes task-relevant information.
– System 1 (S1): A smaller, fast 80-million-parameter transformer acts as a reactive visuomotor policy. It processes raw visual features at high frequency and outputs continuous control signals for the robot’s entire upper body at 200 Hz.
This hybrid design resolves a classic tradeoff in robotics: balancing thoughtful reasoning (slow) with lightning-fast, precise movements (fast). The systems communicate via a shared latent space, trained end-to-end so gradients flow from actions back to perception and language understanding. Helix runs entirely onboard the robot using dual low-power embedded GPUs, making it commercially viable without relying on massive cloud servers.
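The slow/fast split described above can be illustrated with a toy control loop. This is a minimal sketch of the general idea, not Figure’s code: the function names, latent dimension, and update rates are illustrative assumptions, with S2 refreshing a latent vector a few times per second while S1 consumes the latest latent on every 200 Hz tick.

```python
import random

# Toy illustration of a "System 2 / System 1" split (illustrative only, not Figure's code).
# S2: slow "reasoning" step that turns an observation + command into a semantic latent.
# S1: fast reactive policy that turns the latest latent + observation into an action.

LATENT_DIM = 4  # assumed toy size; the real latent dimensionality is not public

def system2_latent(observation, command):
    """Slow path (~8 Hz): encode scene + instruction into a semantic latent vector."""
    seed = hash((tuple(observation), command)) % (2**32)
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(LATENT_DIM)]

def system1_action(observation, latent):
    """Fast path (200 Hz): map raw observation + current latent to a motor command."""
    return [o * 0.1 + z for o, z in zip(observation, latent)]

def control_loop(command, ticks=200, s2_period=25):
    """Simulate one second: S1 fires every tick, S2 only every s2_period ticks."""
    latent = [0.0] * LATENT_DIM
    actions = []
    for t in range(ticks):
        observation = [random.random() for _ in range(LATENT_DIM)]
        if t % s2_period == 0:  # S2 refreshes the shared latent at ~8 Hz
            latent = system2_latent(observation, command)
        actions.append(system1_action(observation, latent))  # S1 acts at 200 Hz
    return actions

actions = control_loop("fold the laundry")
print(len(actions))  # 200 control steps in one simulated second
```

The design point the sketch captures is that the fast policy never waits on the slow model: it always acts on the most recent latent, so deliberate reasoning and reactive control run at their own natural rates.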
Training involved around 500 hours of high-quality teleoperated data from multiple robots and operators—far less than many prior VLA efforts. Instructions were auto-labeled using VLMs on video clips, creating natural language pairings. The result? A model that generalizes remarkably well, even to novel objects and situations.
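The auto-labeling step can be sketched as a simple pairing loop. Everything here is a hypothetical stand-in, `query_vlm` is a placeholder for a real vision-language model call and the canned response is invented; Figure’s actual labeling pipeline is not public:

```python
# Hypothetical sketch of auto-labeling: a VLM watches a teleoperated video clip
# and proposes the natural-language instruction that would produce that behavior.

def query_vlm(prompt, frames):
    """Placeholder for a vision-language model call (canned response for the sketch)."""
    return "fold the towel on the table"

def auto_label(clips):
    """Pair each demonstration clip with a generated natural-language instruction."""
    prompt = "What instruction would produce the behavior seen in this video?"
    return [(clip_id, query_vlm(prompt, frames)) for clip_id, frames in clips]

dataset = auto_label([("clip_001", ["frame0", "frame1"])])
print(dataset)  # [('clip_001', 'fold the towel on the table')]
```

The payoff of this approach is scale: instruction labels come from a model rather than human annotators, so every hour of teleoperated video yields training pairs automatically.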
From Logistics to Laundry
One of the most impressive aspects of the laundry demonstration is its simplicity in terms of AI architecture. Helix had already proven itself in industrial settings, completing an hour of fully autonomous package reorientation on a conveyor belt—flipping and adjusting boxes with speed and reliability.
For laundry, Figure made no changes to the model architecture or training hyperparameters. They simply collected and added a new dataset of laundry-related behaviors. The same general-purpose “brain” transitioned seamlessly from factory logistics to domestic chores.
In the demonstration videos, the Figure 02 humanoid approaches a mixed pile of towels. It picks one at a time, often using its multi-fingered hands to trace edges with a thumb, pinch corners precisely, or unravel tangles. If it accidentally grabs multiple towels (a “multi-pick” error common with soft fabrics), it corrects by separating and returning the extras. It adapts its folding strategy to how each towel lands, smoothing rough spots, adjusting folds, and stacking the towels neatly into a basket.
A Helix-powered robot neatly folding clothes
The robot also exhibits natural multimodal interaction: it maintains eye contact, directs its gaze helpfully, and uses learned hand gestures while engaging with people. This makes the interaction feel more human-like and less mechanical.
Another video shows the robot loading laundry into a washing machine—grasping clothes from a basket, identifying the machine door, and placing items inside with care and precision. These tasks highlight Helix’s ability to handle deformable objects without explicit 3D models or brittle object representations, which often fail for fabrics that bend and shift constantly.
Folding rates in early demos hover around 20-22 seconds per towel, which is slower than a rushed human but impressive for full autonomy. More importantly, the robot generalizes: when the table height was raised by 6 inches mid-task in one test video, it continued without missing a beat, showcasing robustness to environmental changes.
Why Laundry Folding Is So Hard for Robots
To appreciate this achievement, consider the core difficulties of manipulating deformable objects:
1. Infinite Configurations: A towel can land in countless crumpled states. There’s no single “correct” grasp point or predictable physics model.
2. Real-Time Adaptation: The fabric deforms the moment it’s touched. Robots must sense (via vision, since tactile feedback is limited) and adjust finger pressure, tension, and motion on the fly.
3. Fine Dexterity: Tracing an edge, pinching a corner without tearing, smoothing wrinkles, or untangling requires coordinated control of individual fingers, wrists, and even torso posture for better reach and stability.
4. Error Recovery: Dropping an item, grabbing two instead of one, or creating a messy fold demands the robot to recognize the issue and improvise without human intervention.
5. Lack of Rigidity: Unlike picking a mug or box, fabrics have no fixed geometry. Traditional planning algorithms that assume rigid bodies break down.
Helix bypasses many of these by operating end-to-end: raw pixels and language input flow directly to continuous motor commands. No intermediate symbolic representations that could introduce fragility. The model learns implicit understanding of fabric behavior through data, much like humans develop intuition from experience.
This contrasts with earlier robotic laundry attempts, which often used specialized grippers, scripted sequences, or hybrid systems that still required significant human oversight for edge cases.
Broader Implications for Humanoid Robotics
Figure’s demonstration isn’t just about laundry—it’s a proof point for scalable, generalist humanoid intelligence. Humanoids like Figure 02 are designed with human-like form factors: bipedal legs, dexterous hands, and a torso that fits into our built environment (homes, offices, factories) without major modifications.
By showing that one model can handle both logistics (structured, repetitive) and household tasks (unstructured, variable) with only new data, Figure highlights a path toward “foundation models” for robotics, analogous to large language models in AI: as data collection scales across more tasks, environments, and robots, capabilities compound.
Potential applications extend far beyond chores:
– Elderly Care and Assistance: Robots that fold clothes, load machines, put away groceries, or assist with daily living for those with mobility issues.
– Warehousing and Manufacturing: Extending from package handling to more delicate or variable items.
– Hospitality and Healthcare: Handling linens, sorting, or light cleaning in dynamic settings.
– Home Integration: Imagine a robot that responds to “do the laundry” by loading the washer, later folding, and even putting items away, freeing humans from repetitive drudgery.

Figure AI Humanoid Robot Powered by Helix
Figure envisions a future where humanoid robots become ubiquitous assistants, learning new skills rapidly through data rather than reprogramming. CEO Brett Adcock and the team emphasize scaling real-world data as the key to faster, more dexterous, and more generalized performance.
Challenges remain, of course. Current demos are impressive but not yet perfect—folds aren’t always crisp, speed can improve, and full end-to-end home integration (including navigation, safety around humans/pets, and long-horizon planning) requires more work. Energy efficiency, cost, and reliability in messy real-world homes are ongoing hurdles. Ethical questions around job displacement in certain sectors also deserve discussion, though proponents argue robots will augment rather than fully replace human labor in many areas.
The Road Ahead: Scaling Toward Everyday Robots
This laundry milestone builds on Figure’s rapid progress. The company has demonstrated collaborative tasks (two robots storing groceries together), zero-shot picking of thousands of novel household objects, and smooth whole-body control.
As data scales—through more teleoperation, simulation, or even robot-to-robot teaching—Helix and its successors should improve dramatically. We may soon see robots that not only fold laundry but handle entire cycles: sorting dirty clothes, loading washers and dryers, ironing, folding, and organizing clothes in drawers, all via natural conversation.
Compared to competitors like Tesla’s Optimus (which has shown laundry-related demos but often with less emphasis on full end-to-end neural autonomy in early videos), Figure’s approach with a unified VLA stands out for its data efficiency and generalization.
In a world facing labor shortages in caregiving, manufacturing, and service industries, humanoid robots powered by models like Helix could provide meaningful relief. They won’t replace the joy of human connection or creativity, but they can take over the tedious physical tasks that consume so much time.
A Glimpse of the Robotic Future
The sight of a humanoid robot, guided solely by the Helix VLA model, autonomously loading dirty clothes into a washing machine and folding clothes neatly is more than a cool demo—it’s a window into an emerging era where AI meets physical intelligence. What was once science fiction is becoming engineering reality, one dataset at a time.
Figure AI’s achievement underscores a powerful truth: generalist models, trained end-to-end on diverse real-world behaviors, can bridge the gap between digital understanding and physical action. As these systems evolve, humanoid robots may soon become reliable household companions, transforming how we live, work, and allocate our time.
Laundry washing and folding today. Tomorrow? Countless other tasks that make life easier and more fulfilling. The helix of progress in vision, language, and action is spinning faster than ever—and it’s bringing robots into our homes, one neat fold at a time.
This breakthrough highlights the accelerating pace of robotics. While perfect human-level performance across all chores is still on the horizon, demos like this show we’re closer than many expected just a few years ago. The combination of advanced hardware (dexterous hands, human-like form) and scalable AI (Helix’s efficient architecture) points to a future where robots handle the mundane so we can focus on what matters most.
Disclaimer!
This publication is made for educational and awareness purposes. It is not made for the sale of any product or service. The information provided here is based on verified, human-aided research and studies.