Animated Fragment


AI Generation: Runway

Motion Inaccuracy
The animation process began with an experiment using Runway's motion tools to generate dynamic effects directly from the original fan painting.
However, the results lacked sufficient accuracy and fidelity to the original brushwork and aesthetic details.

As a result, this approach was discontinued in favor of manually animating specific elements from the fan painting to preserve its stylistic integrity.


2D Animation: Procreate + After Effects

Element Separation in Procreate

Individual visual components from the fan painting were manually extracted and isolated using Procreate. This included tracing and masking plant forms, brushstrokes, and compositional fragments.

Motion Composition in After Effects

The separated layers were imported into After Effects, where motion was applied using basic transform functions such as position shift, scale, rotation, and opacity fades. Subtle animations were designed to mimic the rhythm and delicacy of Song Dynasty brushwork.
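
To make the character of this motion concrete, the sketch below expresses the kind of slow, periodic sway and fade that was keyframed by hand. It is a conceptual Python illustration rather than an After Effects script, and every timing and amplitude value in it is hypothetical.

    import math

    def subtle_motion(t, period=6.0, sway_deg=2.0, drift_px=4.0):
        """Gentle periodic motion meant to echo slow brushwork rhythms.

        t         -- time in seconds
        period    -- seconds per full sway cycle (hypothetical value)
        sway_deg  -- peak rotation in degrees
        drift_px  -- peak horizontal drift in pixels
        """
        phase = 2.0 * math.pi * t / period
        rotation = sway_deg * math.sin(phase)        # slow rocking
        x_offset = drift_px * math.sin(phase * 0.5)  # even slower drift
        opacity = 100.0 * min(1.0, t / 2.0)          # two-second fade-in
        return rotation, x_offset, opacity

    # Sample the values one might keyframe for a single plant layer.
    for t in (0.0, 1.5, 3.0, 4.5, 6.0):
        rot, dx, alpha = subtle_motion(t)
        print(f"t={t}s  rotation={rot:+.2f} deg  x={dx:+.2f} px  opacity={alpha:.0f}%")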




3D Animation: Blender + Premiere Pro

Animating the butterfly’s flight proved to be particularly challenging using 2D tools.

To better capture the depth, spatial rotation, and natural movement of the wings, Blender was used to create the butterfly animation in 3D.

The final workflow was as follows:

Model & Animate: A simple butterfly model was created and animated in Blender to simulate realistic wing motion and a natural flight path.

Render & Composite: The animation was rendered as a PNG sequence with a transparent background, then imported into Premiere Pro and layered over the background animation created in After Effects, integrating it seamlessly into the scene.
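
A minimal Blender Python (bpy) sketch of this two-step setup is shown below. It assumes a wing object named "Wing.L" already exists in the scene; the frame range, flap rate, and output path are illustrative stand-ins rather than the project's actual settings.

    import bpy
    import math

    scene = bpy.context.scene
    scene.frame_start, scene.frame_end = 1, 120

    # Keyframe a simple flapping motion on an existing wing object.
    # "Wing.L" is a hypothetical object name; adjust to the real model.
    wing = bpy.data.objects["Wing.L"]
    for frame in range(scene.frame_start, scene.frame_end + 1):
        wing.rotation_euler[1] = math.radians(45) * math.sin(frame * 0.8)
        wing.keyframe_insert(data_path="rotation_euler", frame=frame)

    # Render a PNG sequence with a transparent background, so the
    # butterfly frames can be layered over the 2D scene in Premiere Pro.
    scene.render.film_transparent = True
    scene.render.image_settings.file_format = "PNG"
    scene.render.image_settings.color_mode = "RGBA"
    scene.render.filepath = "//butterfly_"  # hypothetical output prefix
    bpy.ops.render.render(animation=True)

The transparent RGBA frames are what allow the butterfly to sit cleanly on top of the After Effects background without any keying or matte work.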





AI-Generated Surroundings


MidJourney


Image Blending (/blend)


MidJourney’s /blend command was used to merge the original fan painting with photographs of real, full-length plants.
This blending process combined the stylistic features of traditional painting with realistic botanical structures. The generated results were then repeatedly re-blended with the original fan image to ensure visual consistency with the source material.


Image-to-Image Generation
Based on the Full Fan Surface



An image-to-image approach was used by uploading the entire fan painting as a reference.
With only a single image as input, MidJourney generated new variations loosely based on the original round fan composition.
However, the outputs lacked precision and coherence, making them less effective than expected.



Image-to-Image Generation
Based on Cropped Elements from the Fan Painting



An image-to-image approach was applied by uploading selected elements cropped from the original fan painting. These fragments, such as individual flowers or branch structures, served as visual anchors for generation.
MidJourney produced new images inspired by these isolated parts. Some of the results effectively preserved the texture and brushwork of the original elements, and were considered usable for further composition or visual expansion.


Stable Diffusion


Image-to-Image Generation 


An image-to-image approach was attempted in Stable Diffusion by uploading the original fan painting as input.
However, the generated outputs deviated significantly from the source image, resulting in unrelated or distorted forms such as animals, toys, and abstract shapes.
This failure was likely due to misconfiguration, in particular an incompatible VAE selection, which produced outputs that were stylistically incorrect and unusable.
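
For reference, a minimal sketch of an image-to-image run with an explicitly pinned VAE, using the Hugging Face diffusers library, is shown below. The model IDs, prompt, and strength value are illustrative assumptions rather than the exact configuration that failed.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline, AutoencoderKL

    # Load a VAE known to match the base checkpoint; a mismatched VAE is
    # one plausible cause of the distorted, off-style outputs described above.
    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    )
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("fan_painting.png").convert("RGB")  # hypothetical path
    result = pipe(
        prompt="Song Dynasty ink-and-color painting of flowers on a round fan",
        image=init_image,
        strength=0.45,       # lower strength keeps more of the source composition
        guidance_scale=7.0,
    ).images[0]
    result.save("fan_variation.png")

Pinning the VAE to one that matches the base checkpoint is the usual guard against the kind of stylistic drift described above.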



Xingtu




Image Extension Function


Surprisingly, the most visually consistent results came from the image extension function of Xingtu, a non-professional photo editing app.
Although not intended for academic or artistic reconstruction, Xingtu's AI-based outpainting produced extensions that closely matched the style of the original fan painting.
This may be due to the app’s optimization for portrait editing, which prioritizes preserving texture, tone, and stylistic coherence during expansion.






Project Summary

Framing the Seasons is a participatory installation that reanimates classical Song Dynasty fans with contemporary interaction. Each fan holds an NFC tag, becoming a bridge between material artifact and digital imagery. Through tangible gestures, visitors unlock seasonal narratives embedded in tradition.
Project Intention

This work explores how interaction can embody historical aesthetics. Blending algorithmic imagery, embedded electronics, and traditional materials, it investigates how technology can revive ancient rhythms of nature, memory, and cultural flow — quietly sensed, and gently framed.