In recent years, the intersection of artificial intelligence (AI) and materials science has opened up new possibilities for understanding complex material behaviours. One such emerging tool is the Physics-Informed Neural Network (PINN), a type of AI model that blends the mathematical precision of physics with the adaptive learning capability of neural networks. While traditional AI models often require vast amounts of data and sometimes ignore physical laws, PINNs uniquely embed these laws into the learning process, making them ideal for applications where data is limited but the physics is well understood.
One key application where PINNs have shown immense promise is material characterisation, the process of identifying and understanding the properties of materials such as metals, polymers, ceramics, or composites. This blog walks you through what PINNs are, why they matter for materials science, and how they are developed step-by-step in a framework suited for characterising material behaviour. Whether you're a student, researcher, or industry professional, this article aims to break down complex AI concepts into understandable parts.
Fig 1. Schematic representation of identifying nonuniform mechanical properties using cv-PINN.
(Liu, C. and Wu, H. (2023) cv-PINN: Efficient learning of variational physics-informed neural network with domain decomposition. Extreme Mechanics Letters, 63, 102051. Available at: doi:10.1016/j.eml.2023.102051.)
Before we dive into the development framework, it's important to understand what makes PINNs different. A neural network is a computer algorithm inspired by the human brain that can learn patterns from data. However, in many scientific applications, we also have governing equations, like Newton's laws, thermodynamics, or equations of elasticity. A PINN is trained not only to fit the data but also to obey these equations.
So rather than needing millions of data points, PINNs can learn from fewer measurements and still make accurate predictions because they are “guided” by physics. This makes them especially useful in material characterisation, where experimental data is often sparse, expensive, or difficult to obtain.
Material characterisation involves understanding mechanical properties such as strength, elasticity, fracture toughness, and thermal properties. Conventionally, this is done through experimental testing or computational simulations (e.g., finite element analysis). However, these methods can be time-consuming, expensive, and not easily scalable. By using PINNs, one can infer these properties more efficiently, especially when dealing with complex behaviours such as plasticity, anisotropy, or microstructural interactions. Moreover, they can be applied to inverse problems, where we try to determine unknown material properties based on how the material responds under certain conditions.
The development of a PINN for material characterisation typically involves five detailed steps. Let us explore each step in depth.
1. Formulation of the Governing Equations
The first step is to clearly define the physical laws that govern the behaviour of the material. This involves writing down the equations that describe the system, such as:
Stress-strain relations (Hooke's law for linear elasticity)
Equations of motion or equilibrium
Heat conduction equations (for thermal characterisation)
Constitutive models (for plastic or viscoelastic materials)
These equations are often in the form of partial differential equations (PDEs). For example, in the case of an elastic material, the equilibrium equation ∇·σ + b = 0 (where σ is the stress tensor and b is the body force) is one such governing law. These equations form the “physics” part of the PINN and will guide the learning process of the neural network. This stage requires a solid understanding of the material mechanics involved, as the quality of the model depends heavily on how accurately the physics is represented.
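To make the idea of a governing-equation residual concrete, here is a minimal sketch in plain Python. It uses an assumed 1D elasticity example, where equilibrium reduces to E·u''(x) + b(x) = 0; the displacement field, modulus, and body force are all hypothetical choices for illustration, and the second derivative is approximated with finite differences rather than the automatic differentiation a real PINN would use.

```python
# Assumed 1D example: equilibrium reduces to E * u''(x) + b(x) = 0.
# With E = 1 and b(x) = -2, the field u(x) = x**2 satisfies the
# equation exactly, since u''(x) = 2 and 1*2 + (-2) = 0.

def u(x):
    return x ** 2          # candidate displacement field

def residual(x, E=1.0, b=-2.0, h=1e-4):
    # second derivative via central finite differences
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    return E * u_xx + b

print(abs(residual(0.5)) < 1e-6)  # → True: equilibrium is satisfied
```

A PINN evaluates exactly this kind of residual at many points in the domain and drives it towards zero during training.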
2. Design of the Neural Network Architecture
Once the physics is defined, the next step is to build a neural network capable of learning the material behaviour. The input to the network typically consists of spatial and temporal coordinates (e.g., x, y, z, t), while the outputs could be field quantities like displacement, strain, temperature, or stress. The architecture generally involves multiple layers of interconnected nodes (neurons), and the complexity of the model depends on the problem at hand. For simpler linear material behaviour, a shallow network may suffice. For more complex, nonlinear phenomena like plastic deformation or crack propagation, deeper and more sophisticated architectures may be required.
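The mapping from coordinates to field quantities can be sketched as a small fully connected network. The example below is written with plain Python lists purely for clarity; the layer sizes, weights, and the choice of tanh activation are illustrative assumptions, and a real PINN would use TensorFlow or PyTorch layers instead.

```python
import math

def dense(inputs, weights, biases):
    # one fully connected layer with tanh activation
    return [math.tanh(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def mlp(x, t):
    # hidden layer: maps the coordinate pair (x, t) to two features
    h = dense([x, t], [[0.1, -0.2], [0.3, 0.4]], [0.0, 0.0])
    # linear output layer: the predicted field value (e.g. displacement)
    return 0.5 * h[0] - 0.5 * h[1]

print(mlp(0.0, 0.0))  # → 0.0, since tanh(0) = 0
```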
A key point here is that the network must be differentiable, as PINNs require the computation of derivatives to match the PDEs. This is made possible by automatic differentiation, a feature supported in AI libraries such as TensorFlow or PyTorch.
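To illustrate what automatic differentiation does, here is a toy forward-mode implementation using dual numbers. This is a deliberately simplified sketch of the idea behind the machinery in TensorFlow and PyTorch, not how those libraries are implemented in practice.

```python
# Toy forward-mode automatic differentiation with dual numbers:
# each value carries its derivative alongside it, and arithmetic
# rules propagate both exactly (no finite-difference error).

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x   # analytically, f'(x) = 6x + 2

x = Dual(2.0, 1.0)             # seed: derivative of x w.r.t. itself is 1
y = f(x)
print(y.val, y.der)            # → 16.0 14.0
```

This is how a PINN can evaluate the exact derivatives appearing in a PDE residual directly from the network's output, with no discretisation grid.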
3. Loss Function Construction with Embedded Physics
In traditional neural networks, the loss function measures how far the network’s predictions are from the known data. In PINNs, however, the loss function is composed of two parts:
Data Loss: The difference between the neural network output and the actual experimental data (e.g., measured displacements or temperatures).
Physics Loss: The error in satisfying the governing PDEs. This is evaluated by substituting the network’s predictions into the differential equations and computing how much they deviate from zero.
For example, if the PDE is ∇·σ + b = 0, and the network predicts displacements u(x), the stress σ can be derived, and the residual ∇·σ + b is computed. The network is then trained to minimise this residual. By balancing these two losses, the PINN not only fits the data but also ensures that its outputs are physically consistent. This makes it a much more robust and interpretable model.
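The two-part loss described above can be sketched in a few lines of plain Python. The function names and the relative weighting are illustrative assumptions; in practice the weight balancing the data and physics terms is an important tuning choice.

```python
# Sketch of a composite PINN loss: total = data misfit + weighted PDE residual.

def data_loss(pred, measured):
    # mean squared error against experimental measurements
    return sum((p - m) ** 2 for p, m in zip(pred, measured)) / len(pred)

def physics_loss(residuals):
    # residuals: the PDE left-hand side evaluated at collocation points;
    # zero everywhere means the prediction satisfies the physics exactly
    return sum(r ** 2 for r in residuals) / len(residuals)

def pinn_loss(pred, measured, residuals, weight=1.0):
    return data_loss(pred, measured) + weight * physics_loss(residuals)

# a perfect fit with exact physics gives zero loss
print(pinn_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.0]))  # → 0.0
```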
4. Training the Network with Initial and Boundary Conditions
Once the loss function is set up, the network is trained using optimisation algorithms. This involves feeding in known boundary and initial conditions. For instance, if we’re modelling a metal beam under load, the supports and applied loads are known quantities and are incorporated into the training data. Since PINNs solve PDEs, they require that the network respects these constraints. During training, the network continuously adjusts its internal weights so that it both satisfies the boundary/initial conditions and reduces the physics and data losses. An interesting aspect of PINNs is that even with no experimental data inside the domain, the network can predict behaviour throughout the material as long as the physics and boundaries are correctly defined. This is extremely useful in scenarios where sensors or instruments cannot access certain regions of the material.
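A stripped-down version of this training loop can be shown with a single trainable parameter. In the assumed example below, the trial solution u(x) = a·x·(1 − x) satisfies the boundary conditions u(0) = u(1) = 0 by construction, and gradient descent on the physics loss alone recovers the exact answer; real PINNs do the same thing with thousands of network weights and an optimiser such as Adam.

```python
# Minimal training sketch (assumed example): the PDE is u''(x) + 2 = 0
# with u(0) = u(1) = 0, whose exact solution is u(x) = x * (1 - x).
# For the trial solution u(x) = a * x * (1 - x), u'' = -2a, so the
# physics residual is (-2a + 2) and the exact answer is a = 1.

def physics_loss(a):
    residual = -2 * a + 2        # u'' + 2 for the trial solution
    return residual ** 2

a, lr = 0.0, 0.05
for _ in range(200):
    grad = 8 * a - 8             # d/da of (2 - 2a)**2
    a -= lr * grad               # gradient-descent update
print(round(a, 4))               # → 1.0
```

Note that no "data" appears in this loop at all: the boundary conditions and the PDE residual are enough, which mirrors the point above about predicting behaviour in regions without measurements.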
5. Validation and Interpretation of Results
After training, the network’s predictions must be validated against known results, either from experiments or simulations. This step ensures that the model is accurate and generalises well to new conditions. In material characterisation, this could mean comparing predicted stress-strain curves with actual tensile test results, or comparing temperature profiles in a heat-treated alloy with experimental thermographic images. Once validated, the PINN model can be used to infer unknown properties. For instance, in an inverse problem, if you apply a known load and observe the resulting displacements, the PINN can help estimate the underlying Young’s modulus or Poisson’s ratio of the material. Beyond prediction, PINNs can also offer insights into the internal states of the material (e.g., stress distribution, damage zones), which might be difficult to obtain from experiments alone.
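The inverse-problem idea can be illustrated with the simplest possible example: recovering a Young's modulus from a single displacement measurement. The geometry, load, and candidate values below are hypothetical, and the closed-form bar formula stands in for the full PDE solve a PINN would perform, but the logic, minimising the misfit between predicted and observed response over candidate properties, is the same.

```python
# Inverse-problem sketch (assumed 1D example): a bar of length L and
# cross-section A under axial force F has tip displacement
# u = F * L / (E * A). Given a "measured" displacement, we recover
# Young's modulus E by minimising the misfit over candidate values.

F, L, A = 1000.0, 2.0, 1e-4           # hypothetical load and geometry
E_true = 200e9                         # steel-like modulus (Pa)
u_measured = F * L / (E_true * A)      # synthetic "measurement"

def misfit(E):
    return (F * L / (E * A) - u_measured) ** 2

# coarse grid search over candidate moduli
candidates = [E_true * s for s in (0.5, 0.8, 1.0, 1.2, 1.5)]
E_best = min(candidates, key=misfit)
print(E_best == E_true)                # → True
```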
Fig 2. (A) The ground truth, (B) the prediction, and (C) the point-wise error of the displacement field ux; (D) the convergence history of the two Lamé parameters λ, μ.
(Liu, C. and Wu, H. (2023) cv-PINN: Efficient learning of variational physics-informed neural network with domain decomposition. Extreme Mechanics Letters, 63, 102051. Available at: doi:10.1016/j.eml.2023.102051.)
PINNs are being explored for a wide range of material characterisation tasks, including:
Predicting mechanical properties of composites and biomaterials
Inferring damage evolution in cracked structures
Characterising heat conduction in multi-layered materials
Modelling diffusion and chemical reactions in battery materials
Estimating viscoelastic parameters of soft tissues
Their ability to work with small datasets and incorporate well-established physics makes them a powerful tool in both academia and industry.
The development of Physics-Informed Neural Networks for material characterisation represents a significant advancement in how we approach complex material behaviour. By bridging the gap between data-driven AI and physics-based modelling, PINNs offer a more efficient, interpretable, and scalable method for understanding materials. Although they are still an emerging technology, the potential of PINNs is vast. As computational power increases and tools become more user-friendly, we can expect to see wider adoption across materials research, manufacturing, aerospace, biomedical engineering, and more.
In the future, PINNs could become standard tools in digital twins of materials, enabling real-time monitoring, control, and optimisation of engineering systems based on precise material behaviour. Whether you are a material scientist looking to speed up characterisation, or an AI enthusiast aiming to work on impactful problems, PINNs provide an exciting and rich area for exploration.