To deliver the functionality described above, our team deployed the following technologies.
Back end
The client’s platform relies on a microservices architecture, so we built the entire Stories capability around AWS-native components. Coordinating the microservices with one another and with the user’s device was a challenge: stories expire after 24 hours; creators delete individual stories; users remove their entire accounts. Each of these events must be reflected on the viewer’s device immediately, so users never see expired or removed content in their story marquee.
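For illustration, one way to enforce the 24-hour window at the storage layer is DynamoDB’s native TTL feature paired with a read-time guard, since TTL deletion is not instantaneous. This is a minimal sketch; the table shape and attribute names are assumptions, not the client’s actual schema:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Write a story with an expiry timestamp. DynamoDB's TTL feature deletes
// the item after `expiresAt`, but deletion can lag by hours, so readers
// must still filter on the same attribute.
async function putStory(storyId: string, creatorId: string, mediaUrl: string) {
  const now = Math.floor(Date.now() / 1000);
  await ddb.send(new PutCommand({
    TableName: "stories",            // hypothetical table name
    Item: {
      storyId,
      creatorId,
      mediaUrl,
      createdAt: now,
      expiresAt: now + 24 * 60 * 60, // TTL attribute: epoch seconds, +24h
    },
  }));
}

// Read-time guard: treat anything past its TTL as already gone.
function isLive(story: { expiresAt: number }): boolean {
  return story.expiresAt > Math.floor(Date.now() / 1000);
}
```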
Another difficulty was ensuring that users could resume a story exactly where they left off. The Stories feature had to survive interruptions—phone calls, backgrounding, low battery—while also accounting for constantly changing story availability.
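The resume logic itself reduces to reconciling the viewer’s last known position with whatever is still live. Here is a minimal sketch of that reconciliation, with a hypothetical shape for the saved state:

```typescript
interface SavedPosition {
  storyId: string;  // the story the viewer was watching
  offsetMs: number; // playback offset within that story
}

// Given the stories still live for a creator (in display order) and the
// viewer's saved position, decide where playback should resume. If the
// saved story has expired or been deleted, fall back to the first live
// story rather than failing.
function resumePoint(
  liveStories: string[],
  saved: SavedPosition | null,
): { storyId: string; offsetMs: number } | null {
  if (liveStories.length === 0) return null;       // nothing left to show
  if (saved && liveStories.includes(saved.storyId)) {
    return { storyId: saved.storyId, offsetMs: saved.offsetMs };
  }
  return { storyId: liveStories[0], offsetMs: 0 }; // availability changed
}
```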
Yet another challenge was implementing reactions. The reaction integration service had to be abstract and general enough to support configuring new reaction types in the future. Our team also introduced a notification center to maintain reaction history, designed from the start to accommodate more diverse notification types.
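As a sketch of what “abstract and general” can mean in practice, reaction types can live in configuration rather than code, so adding one becomes a data change. All names below are illustrative, not the client’s actual schema:

```typescript
// Reaction types are data, not code: adding a new one means adding a
// registry entry, not redeploying the integration service.
interface ReactionType {
  id: string;      // e.g. "like", "fire"
  notify: boolean; // whether it lands in the notification center
}

const reactionTypes = new Map<string, ReactionType>([
  ["like", { id: "like", notify: true }],
  ["fire", { id: "fire", notify: true }],
]);

interface ReactionEvent {
  storyId: string;
  userId: string;
  typeId: string;
  at: number;
}

// In-memory history stands in for the notification center here.
const history: ReactionEvent[] = [];

function recordReaction(event: ReactionEvent): boolean {
  const type = reactionTypes.get(event.typeId);
  if (!type) return false; // unknown reaction type: reject
  history.push(event);
  if (type.notify) {
    // enqueue a notification (omitted) -- the same path can later
    // carry other notification types
  }
  return true;
}
```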
To keep the feature fast and responsive, we created a dual-layer storage and synchronization approach that delivers real-time updates while maintaining reliable long-term records.
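A minimal sketch of that dual-layer read path, assuming ioredis and the AWS SDK v3 document client; the key format, TTL, and table name are illustrative:

```typescript
import Redis from "ioredis";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const redis = new Redis();                                       // short-term layer
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({})); // long-term layer

// Read-through: serve from Redis when possible, fall back to DynamoDB,
// then backfill the cache with a short TTL so hot stories stay fast
// without letting stale data outlive the cache window.
async function getStory(storyId: string): Promise<Record<string, unknown> | null> {
  const cached = await redis.get(`story:${storyId}`);
  if (cached) return JSON.parse(cached);

  const result = await ddb.send(new GetCommand({
    TableName: "stories", // hypothetical table name
    Key: { storyId },
  }));
  if (!result.Item) return null;

  await redis.set(`story:${storyId}`, JSON.stringify(result.Item), "EX", 60);
  return result.Item;
}
```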
Architecture:
We used the following services and technologies:
● AWS Elemental MediaConvert to process uploaded videos and convert them into Apple HLS format for faster and smoother streaming
● Amazon S3 and CloudFront to store and deliver stories with low latency
● Amazon DynamoDB as the primary database for story metadata, reactions, user states, and more
● Redis for short-term storage to speed up response time and reduce DynamoDB load
● Node.js for microservices to orchestrate all interactions
● HTTP and AMQP for efficient inter-service communication
● WebSocket protocol to handle communication between client and server, such as delivering real-time updates (see the sketch after this list)
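To make the last item concrete, here is a minimal sketch of server-side push with the ws package: when a story expires or is deleted, connected clients receive an invalidation message and can drop the story from the marquee at once. The message shape is an assumption:

```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

// Broadcast a story-removal event so clients can drop it from the
// marquee immediately instead of waiting for the next fetch.
function broadcastStoryRemoved(storyId: string, reason: "expired" | "deleted") {
  const message = JSON.stringify({ type: "story_removed", storyId, reason });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message);
    }
  }
}
```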
Mobile client
The front-end work pushed far beyond standard mobile development. To offer a rich media and video-processing experience, our engineers implemented custom algorithms, precise mathematical manipulations, and careful optimization to keep performance smooth on devices with limited memory and power.
The mobile environment added more complexity, as any app can be interrupted at any moment by calls, notifications, or backgrounding. Yet story creation, processing, and upload still had to remain reliable under all conditions.
The interface itself introduced another challenge. Users interact with stories through a wide range of gestures, and the app needed to interpret every one of them without lag. We had to prepare for a multitude of edge cases: rewinding while a finger remains on the screen, jumping between stories, switching creators mid-gesture, and rapidly combining multiple actions.
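Those edge cases reduce to a small state machine. Here is a sketch of the logic in TypeScript for readability (the production clients implement it natively in Kotlin and Swift); the event names and threshold are illustrative:

```typescript
type Gesture =
  | { kind: "down" }                          // finger lands: pause playback
  | { kind: "up"; heldMs: number }            // finger lifts: tap or release
  | { kind: "swipe"; dir: "left" | "right" }; // jump between creators

interface ViewerState {
  creatorIndex: number;
  storyIndex: number;
  paused: boolean;
}

const TAP_THRESHOLD_MS = 200; // below this, an "up" counts as a tap

// A single reducer handles every gesture, so rapid combinations (hold,
// then swipe mid-gesture) cannot leave the player in an inconsistent
// state. Index bounds clamping is omitted for brevity.
function reduce(state: ViewerState, g: Gesture): ViewerState {
  switch (g.kind) {
    case "down":
      return { ...state, paused: true };
    case "up":
      if (g.heldMs < TAP_THRESHOLD_MS) {
        // quick tap: advance to the next story, unpaused
        return { ...state, storyIndex: state.storyIndex + 1, paused: false };
      }
      return { ...state, paused: false }; // long hold: just resume
    case "swipe":
      return {
        creatorIndex: state.creatorIndex + (g.dir === "left" ? 1 : -1),
        storyIndex: 0,
        paused: false,
      };
  }
}
```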
Architecture:
● OpenGL and Android Media3 with custom matrix transformations on the phone’s graphics adapter to streamline content creation and video rendering (the math is sketched after this list)
● WorkManager framework to reliably render and upload media data in the background given the restrictions of mobile devices
● Combine framework for handling complex, asynchronous events
● ExoPlayer to ensure stable, high-quality playback across a wide range of devices and network conditions
● Canvas API to support complex layered editing and media transformation
● Swift Package Manager for handling external libraries and dependencies
● Kotlin, Swift, and Objective-C as programming languages
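The matrix transformations mentioned above are ordinary affine math. As an illustration (in TypeScript for readability; the production code runs in Kotlin against OpenGL), this composes translation, rotation, and scale into a single 3x3 matrix of the kind handed to the GPU:

```typescript
// Row-major 3x3 matrix for 2D affine transforms in homogeneous coordinates.
type Mat3 = [
  number, number, number,
  number, number, number,
  number, number, number,
];

function multiply(a: Mat3, b: Mat3): Mat3 {
  const out: Mat3 = [0, 0, 0, 0, 0, 0, 0, 0, 0];
  for (let r = 0; r < 3; r++)
    for (let c = 0; c < 3; c++)
      for (let k = 0; k < 3; k++)
        out[r * 3 + c] += a[r * 3 + k] * b[k * 3 + c];
  return out;
}

// Place a layer (sticker, text, clip) on a story frame: scale first,
// then rotate, then translate, i.e. the combined matrix is T * R * S.
function layerTransform(tx: number, ty: number, angleRad: number, scale: number): Mat3 {
  const t: Mat3 = [1, 0, tx, 0, 1, ty, 0, 0, 1];
  const r: Mat3 = [
    Math.cos(angleRad), -Math.sin(angleRad), 0,
    Math.sin(angleRad),  Math.cos(angleRad), 0,
    0, 0, 1,
  ];
  const s: Mat3 = [scale, 0, 0, 0, scale, 0, 0, 0, 1];
  return multiply(multiply(t, r), s);
}
```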