The process of computational design in mechanical engineering often begins with a problem or a goal, followed by an assessment of literature, resources, and systems available to address the issue.

The Design Computation and Digital Engineering (DeCoDE) Lab at MIT instead explores the bounds of what is possible.

Working with the MIT-IBM Watson AI Lab, the group’s lead, ABS Career Development Assistant Professor Faez Ahmed, and graduate student Amin Heyrani Nobari in the Department of Mechanical Engineering are combining machine learning and generative AI techniques, physical modelling, and engineering principles to tackle design challenges and enhance the creation of mechanical systems.

One of their projects, Linkages, investigates ways planar bars and joints can be connected to trace curved paths. Here, Ahmed and Nobari describe their recent work. 

The Linkages framework learns ways to trace curves. Image courtesy of the authors.

How is your team considering approaching mechanical engineering questions from the standpoint of observations?

Ahmed: The question we have been thinking about is: how can generative AI be used in engineering applications? A key challenge there is incorporating precision into generative AI models.

Now, in the specific work we have been exploring there, we are using this idea of self-supervised contrastive learning, where effectively we are learning these joint representations of linkages and curves – what the design looks like, and how it works.

This ties very closely with the idea of automated discovery: Can we actually discover new products with AI algorithms? Another comment on the broader picture: one of the key ideas, specifically with linkages, but broadly around generative AI and large language models – all of these are the same family of models that we are looking at, and precision really plays a big role in all of them.

So, the learnings we have from these types of models – some form of data-driven learning, assisted by engineering simulators, with joint embeddings of design and performance – can potentially translate to other engineering domains as well. What we are showing is a proof of concept. Then people can take it and apply it to designing ships and aircraft, to precise image generation problems, and so on.

In the case of linkages, your design looks like a set of bars and how they are connected. How it works is basically the path they would trace as they move, and we learn these joint representations.

So, there’s your primary input – somebody will come and draw some path – and you’re trying to generate a mechanism that can trace that. That enables us to solve the problem much more precisely and significantly faster, with 28 times lower error and at 20 times the speed of prior state-of-the-art approaches.

Tell me about the linkages method and how it compares to other similar methods.

Nobari: The contrastive learning happens between the mechanisms, which are represented as graphs: each joint is a node in the graph, and each node includes some features. The features are the joints’ positions in space and their types – they can be fixed joints or free joints.
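As an illustrative sketch (not the authors’ code), a planar mechanism of this kind can be encoded as per-joint node features plus an adjacency matrix for the bars. The specific layout below – an `[x, y, type]` feature vector per joint, with type 0 for fixed and 1 for free – is an assumption chosen for illustration.

```python
import numpy as np

# Hypothetical encoding of a 4-joint planar linkage as a graph.
# Each row is one joint: [x position, y position, joint type],
# where type 0 = fixed (grounded) and 1 = free. These conventions
# are illustrative, not taken from the paper.
node_features = np.array([
    [0.0, 0.0, 0],  # fixed pivot
    [2.0, 0.0, 0],  # fixed pivot
    [0.5, 1.0, 1],  # free joint
    [1.5, 1.2, 1],  # free joint (e.g. the one tracing the output curve)
])

# Symmetric adjacency matrix: entry (i, j) is 1 when a rigid bar
# connects joints i and j. Bars are undirected, so the matrix must
# equal its transpose.
adjacency = np.array([
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])

assert (adjacency == adjacency.T).all()  # bars are undirected
```

The discrete part of the design lives in `adjacency` (connected or not), while the continuous part lives in the joint coordinates – the split Ahmed describes later in the interview.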

We have an architecture that takes into account some of the basic underlying things when it comes to the description of the kinematics of a mechanism, but it’s essentially a graph neural network that computes embeddings for these mechanism graphs. Then, we have another model that takes as inputs these curves and creates an embedding for that, and we connect these two different modalities using contrastive learning.
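The two-encoder setup described above can be sketched with a standard symmetric contrastive (InfoNCE/CLIP-style) loss. This is a generic stand-in, not the lab’s implementation: the real model uses a graph neural network for mechanisms and a separate curve encoder, whereas here both embedding batches are random placeholders so only the loss that ties the two modalities together is shown.

```python
import numpy as np

def contrastive_loss(mech_emb, curve_emb, temperature=0.07):
    """Symmetric contrastive loss between matched mechanism/curve pairs.

    Row i of mech_emb and row i of curve_emb are assumed to describe
    the same design, so matching pairs sit on the diagonal of the
    similarity matrix.
    """
    # L2-normalise both embedding batches
    m = mech_emb / np.linalg.norm(mech_emb, axis=1, keepdims=True)
    c = curve_emb / np.linalg.norm(curve_emb, axis=1, keepdims=True)
    logits = m @ c.T / temperature  # pairwise cosine similarities, scaled
    labels = np.arange(len(m))      # correct match for row i is column i

    def xent(lg):
        # numerically stable cross-entropy against the diagonal labels
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), labels].mean()

    # average the mechanism-to-curve and curve-to-mechanism directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
loss = contrastive_loss(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```

Training pulls each mechanism’s embedding toward the embedding of the curve it traces and pushes it away from the other curves in the batch, which is what later makes retrieval by target curve possible.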

Then, this contrastive learning framework that we train is used to find new mechanisms, but obviously we care about precision as well. On top of any candidate mechanisms that are identified, we also have an additional optimisation step, where these mechanisms that are identified will be further optimised to get as close as possible to those target curves.

If you’ve got the combinatorial part right, and you’re quite close to where you need to be to get to the target curve that you have, you can do direct gradient-based optimisation and adjust the positions of the joints to get super-precise performance. That’s a very important aspect of making it work.
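This refinement step can be illustrated with a deliberately simplified toy: once the topology is fixed, the continuous parameters are tuned by gradient descent to pull the traced curve onto the target. The `trace` function below is a made-up stand-in for a real kinematic simulator (it just traces a circle from a centre and radius), so only the shape of the optimisation loop carries over.

```python
import numpy as np

def trace(params, angles):
    # Toy "simulator": traces a circle from (cx, cy, r). A real linkage
    # simulator would solve the mechanism's nonlinear kinematics instead.
    cx, cy, r = params
    return np.stack([cx + r * np.cos(angles), cy + r * np.sin(angles)], axis=1)

angles = np.linspace(0, 2 * np.pi, 50)
target = trace(np.array([1.0, -0.5, 2.0]), angles)  # curve we want to hit

params = np.array([0.0, 0.0, 1.0])  # rough initial guess (e.g. from retrieval)
lr, eps = 0.05, 1e-5
for _ in range(500):
    # finite-difference gradient of the mean squared tracing error
    base = ((trace(params, angles) - target) ** 2).mean()
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grad[i] = (((trace(p, angles) - target) ** 2).mean() - base) / eps
    params -= lr * grad  # gradient descent step on the continuous variables

final_error = ((trace(params, angles) - target) ** 2).mean()
```

In the actual framework the gradients would come from a differentiable kinematics model rather than finite differences, but the principle is the same: the discrete search gets you close, and continuous optimisation closes the remaining gap.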

Take the letters of the alphabet as examples: these are very hard to achieve traditionally with existing methods. Other machine learning-based methods are often not even able to do this kind of thing, because they are only trained on four-bar or six-bar linkages, which are very small mechanisms. But what we’ve been able to show is that even with a relatively small number of joints, you can get very close to those curves.

Before this, we didn’t know the limits of design capability with a single linkage mechanism. It’s a very hard question to answer. Can you really write the letter M? No one had ever done that, and the mechanism is so complex and so rare that it’s like finding a needle in a haystack. But with this method, we show that it is possible.

We’ve looked into using off-the-shelf generative models for graphs. Generally, generative models for graphs are very difficult to train, and they’re usually not very effective, especially when it comes to mixing in continuous variables that are highly sensitive to the actual kinematics of the mechanism. At the same time, you have all these different ways of combining joints and linkages. These models simply cannot generate effectively.

The complexity of the problem, I think, is more obvious when you look at how people approach it with optimisation. With optimisation, this becomes a mixed-integer, non-linear problem.

Using some simple bi-level optimisations or even simplifying the problem down, they basically create approximations of all the functions, so that they can use mixed-integer conic programming to approach the problem.

The combinatorial space combined with the continuous space is so big that they can basically go up to seven joints. Beyond that, it becomes extremely difficult, and it takes two days to create one mechanism for one specific target.

If you were to do this exhaustively, it would be very difficult to actually cover the entire design space. This is where you can’t just throw deep learning at it without trying to be a little more clever about how you do that.

The state-of-the-art deep learning-based approaches use reinforcement learning. They – given a target curve – start building these mechanisms more or less randomly, basically a Monte Carlo optimisation type of approach.

The measure for this is directly comparing the curve that a mechanism traces against the target curve that is input to the model, and we show that our model performs about 28 times better. It’s 75 seconds for our approach, while the reinforcement learning-based approach takes 45 minutes. The optimisation approach you run for more than 24 hours, and it doesn’t converge.
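One common way to compare a traced curve against a target curve as point sets is a symmetric Chamfer distance; the paper’s exact metric is not specified here, so this is a hedged illustration of the idea of direct curve-to-curve comparison rather than the authors’ scoring function.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two 2D point sets (N, 2)."""
    # pairwise squared distances between every point in a and every point in b
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    # for each point, squared distance to its nearest neighbour in the
    # other set, averaged in both directions
    return d.min(axis=1).mean() + d.min(axis=0).mean()

t = np.linspace(0, 2 * np.pi, 100)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
shifted = circle + 0.1  # slightly offset copy

assert chamfer(circle, circle) == 0.0  # identical curves score zero
assert chamfer(circle, shifted) > 0.0  # any mismatch scores positive
```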

I think we have reached the point where we have a very robust proof of concept with the linkage mechanisms. It’s a complicated enough problem that we can see conventional optimisation and conventional deep learning alone are not enough.

What’s the bigger picture behind the need to develop techniques like linkages that allow for the future of human-AI co-design?

Ahmed: The most obvious one is design of machines and mechanical systems, which is what we've already shown. Having said that, I think a key contribution of this work is that it’s a discrete and continuous space that we are learning.

So, if you think about the linkages that are out there and how they are connected to each other, that’s a discrete space: either two joints are connected or not, 0 or 1. But where each node is, is a continuous variable – it can be anywhere in the space.

Learning over these combined discrete and continuous spaces is an extremely challenging problem. Most of the machine learning we see is either purely continuous, as in computer vision, or mostly discrete, as in language.

By showing this discrete and continuous system, I think the key idea generalises to many engineering applications from meta-materials to complex networks, to other types of structures, and so on.

There are steps that we are thinking about immediately, and a natural question is around more complex mechanical systems and more physics, like, you start adding different forms of elastic behaviour. Then, you can also think about different types of components.

We are also thinking about how precision in large language models can be incorporated, and some of the learnings will transfer there. We’re thinking about making these models generative.

Right now, they are, in some sense, retrieving mechanisms from a dataset and then optimising them, while generative models would generate these mechanisms directly. We are also exploring end-to-end learning, where the optimisation step is not needed.

Nobari: There are a few places in mechanical engineering where these are used, and there are very common applications of this kind of inverse kinematic synthesis where this would be useful.

A couple that come to mind are, for example, car suspension systems, where you want a specific motion path for your overall suspension mechanism. Usually, they model that in 2D with planar models of the overall suspension mechanism.

I think that the next step, and what is ultimately going to be very useful, is demonstrating the same framework, or a similar one, for other complicated problems that involve combinatorial and continuous values.

These problems include one of the things I’ve been looking into: compliant mechanisms. For example, instead of these discrete rigid linkages, you have the mechanics of continua – a distribution of material, where one part of the material deforms the rest of the material to give you a different kind of motion.

With compliant mechanisms, there are a bunch of different places they’re used, sometimes in precision machines for fixturing, where you want a specific piece held in place by a mechanism that fixtures it consistently and with very high precision. If you could automate a lot of that with this kind of framework, it would be very useful.

These are all difficult problems that involve both combinatorial design variables and continuous design variables. I think that we are very close to that, and ultimately that will be the final stage.