Generative AI is the hot topic of the moment, and we all know that everything AI can do depends on learning from the data we provide.
When we start thinking about applying AI to clothing pattern generation, a serious logical issue emerges.
The final form of both virtual clothing and physical clothing is 3D, and their common source is clothing pattern data.
If the pattern data provided for AI learning comes from 2D CAD drawing software, then the source data was never created in a 3D state, and it is not logical to expect AIGC to play a meaningful role in this situation.
All existing image-generation tools only touch the surface; they cannot deliver digital transformation (DX) in any real sense❗️
2D and 3D are fundamentally different dimensions, so no amount of training will let AI make a perfect transformation between them.
AI needs to learn that the same 3D garment model can yield different combinations of 2D pattern shapes, depending on where the cutting lines are placed.
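A minimal geometric sketch of that idea (my own toy example, not from any pattern-making tool): model a sleeve as a simple cylinder. Cutting the same tube along a straight vertical seam unrolls it into a rectangle, while cutting along a helical seam unrolls it into a sheared parallelogram — the same 3D surface, the same fabric area, but different 2D pattern outlines.

```python
import math

def unrolled_rectangle(r, h):
    """Cylinder (radius r, height h) cut along a straight vertical seam:
    it unrolls into a rectangle of width 2*pi*r and height h."""
    return {"width": 2 * math.pi * r, "height": h, "area": 2 * math.pi * r * h}

def unrolled_parallelogram(r, h, turns=1.0):
    """The same cylinder cut along a helical seam (making `turns` full
    revolutions from hem to top): it unrolls into a parallelogram with the
    same base and height, but with the top edge offset horizontally."""
    shear = turns * 2 * math.pi * r  # horizontal offset of the top edge
    return {"base": 2 * math.pi * r, "height": h, "shear": shear,
            "area": 2 * math.pi * r * h}

# Hypothetical sleeve tube: radius 5 cm, length 60 cm (in metres).
rect = unrolled_rectangle(0.05, 0.6)
para = unrolled_parallelogram(0.05, 0.6)
```

The two pieces consume identical fabric area, yet a model trained only on flat 2D drawings has no way to know they reconstruct the same 3D form — which is exactly why the cutting-line-to-pattern relationship must be part of the training data.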
Without massive pattern data generated the way our method generates it, there could never be an AI product that takes a photo of a garment worn on a person and restores it directly into production-ready pattern sets, automatically or semi-automatically❗
According to a May 3 interview with Professor Geoffrey Hinton in MIT Technology Review, AI will soon be able to conduct thought experiments.
Will such an AI refuse to accept pattern data generated in 2D CAD drawing software for further processing❓🧐🧐
#design #digital #digitalfashion #digitaltwin #3dmodeling #sustainability #apparelindustry #metaverse #fashiondesign #AIinfashion #AIGC #business