Animated Fragment
However, the results lacked sufficient accuracy and fidelity to the original brushwork and aesthetic details.
As a result, this approach was discontinued in favor of manually animating specific elements from the fan painting to preserve its stylistic integrity.
Element Separation in Procreate
Individual visual components from the fan painting were manually extracted and isolated in Procreate. This included tracing and masking plant forms, brushstrokes, and compositional fragments.
Motion Composition in After Effects
The separated layers were imported into After Effects, where motion was applied using basic transform functions such as position shifts, scaling, rotation, and opacity fades. Subtle animations were designed to mimic the rhythm and delicacy of Song Dynasty brushwork.
To better capture the depth, spatial rotation, and natural movement of the wings, Blender was used to create the butterfly animation in 3D.
The final workflow was as follows:
Model & Animate: A simple butterfly model was created and animated in Blender to simulate realistic wing motion and a natural flight path (a scripted sketch of this step follows the list).
Render & Composite: The animation was rendered as a PNG sequence with a transparent background, then imported into Premiere Pro (PR) and layered over the background animation created in After Effects so that it integrated seamlessly into the scene.
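This Blender step can also be scripted through the bpy API. The sketch below is a minimal illustration rather than the actual project file: it assumes placeholder wing objects named "Wing.L" and "Wing.R", keyframes a simple sinusoidal flap, and configures the transparent PNG-sequence output described above.

```python
import math
import bpy

scene = bpy.context.scene

# Render settings for a PNG sequence with a transparent background.
scene.render.film_transparent = True
scene.render.image_settings.file_format = "PNG"
scene.render.image_settings.color_mode = "RGBA"
scene.render.filepath = "//butterfly_frames/"  # frames are numbered automatically

# Hypothetical wing objects; the real rig would use its own names.
wing_l = bpy.data.objects["Wing.L"]
wing_r = bpy.data.objects["Wing.R"]

fps = scene.render.fps
flaps_per_second = 4
amplitude = math.radians(60)

# Keyframe a sinusoidal flap, mirrored between the two wings.
for frame in range(scene.frame_start, scene.frame_end + 1):
    angle = amplitude * math.sin(2 * math.pi * flaps_per_second * frame / fps)
    wing_l.rotation_euler.y = angle
    wing_r.rotation_euler.y = -angle
    wing_l.keyframe_insert(data_path="rotation_euler", frame=frame)
    wing_r.keyframe_insert(data_path="rotation_euler", frame=frame)
```

Rendering the scene then produces an RGBA frame sequence that can be layered directly over the After Effects background in Premiere Pro.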
AI-Generated Surroundings
MidJourney
Image Blending (/blend)
MidJourney's /blend command was used to merge the original fan painting with photographs of real, full-length plants. This blending process combined the stylistic features of traditional painting with realistic botanical structures. The generated results were then repeatedly re-blended with the original fan image to ensure visual consistency with the source material.
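For reference, the /blend interaction in the MidJourney Discord bot looks roughly as follows; the file names are hypothetical placeholders, and the optional dimensions setting is just one way to keep a near-square frame close to the round-fan format.

```
/blend
  image1: fan_painting.png      (scan of the original round fan)
  image2: plant_reference.jpg   (photograph of a full-length plant)
  dimensions: Square            (optional aspect-ratio setting)
```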
Image-to-Image Generation
Based on the Full Fan Surface
An image-to-image approach was used by uploading the entire fan painting as a reference. With only a single image as input, MidJourney generated new variations loosely based on the original round fan composition. However, the outputs lacked precision and coherence, making them less effective than expected.
Based on Cropped Elements from the Fan Painting
An image-to-image approach was applied by uploading selected elements cropped from the original fan painting. These fragments, such as individual flowers or branch structures, served as visual anchors for generation.
MidJourney produced new images inspired by these isolated parts. Some of the results effectively preserved the texture and brushwork of the original elements, and were considered usable for further composition or visual expansion.
Stable Diffusion
Image-to-Image Generation
An image-to-image approach was attempted in Stable Diffusion by uploading the original fan painting as input.
However, the generated outputs deviated significantly from the source image, resulting in unrelated or distorted forms such as animals, toys, and abstract shapes.
This failure was likely caused by misconfiguration, in particular the selection of a VAE incompatible with the base checkpoint, which produced outputs that were stylistically incorrect and unusable.
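For comparison, a minimal img2img sketch using the diffusers library is shown below, with the VAE loaded explicitly so that it matches the base checkpoint. The model identifiers, prompt, and strength value are illustrative assumptions, not the settings used in this experiment.

```python
import torch
from PIL import Image
from diffusers import AutoencoderKL, StableDiffusionImg2ImgPipeline

# Load a VAE known to pair with SD 1.5 checkpoints; a mismatched VAE was the
# suspected cause of the distorted outputs described above.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base model
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("fan_painting.png").convert("RGB").resize((768, 768))

result = pipe(
    prompt="Song Dynasty fan painting, delicate ink and color on silk, botanical study",
    image=init_image,
    strength=0.35,       # low strength keeps the output close to the source composition
    guidance_scale=7.0,
).images[0]
result.save("fan_img2img.png")
```

Keeping the strength low anchors the output to the original composition, while the explicit VAE avoids the decoding artifacts that a mismatched VAE can introduce.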
Xingtu
Surprisingly, the most visually consistent results came from the image extension function of Xingtu, a non-professional photo editing app.
Although not intended for academic or artistic reconstruction, Xingtu's AI-based outpainting produced extensions that closely matched the style of the original fan painting.
This may be due to the app’s optimization for portrait editing, which prioritizes preserving texture, tone, and stylistic coherence during expansion.