Introducing Intelligent Reframing for vertical video clipping and publishing
Scroll through any social feed and the shift in media is impossible to ignore: video has broken free from the horizontal ‘TV’ frame.
A single live moment – a last-minute goal, a breaking news alert, a red-carpet interview – now exists in a state of constant flux. It is a broadcast clip on a TV, a square highlight on a tweet or post, and a vertical story in a social reel: the same content, reshaped to meet audiences wherever they are watching.
Over the past few months, Grabyo has been expanding the tools that eliminate this friction, from AI-powered vertical video production with AWS Elemental Inference to real-time AI integrations such as Open Captions in Live Clipping.
Today we’re introducing another step forward: intelligent reframing tools for clipping and editing. By bringing data-driven keyframing, fine-tuned editing and AI-based optical tracking together in the clipping interface, we’ve built the tools to help teams produce accurate, high-quality vertical video in seconds using Grabyo.
Bringing vertical editing into the clipping workflow
For many teams, producing vertical video still involves slow processes and multiple tools, both on the desktop and in the cloud.
A video clip is created from a live stream, downloaded, reframed in another editing tool (such as Adobe Premiere or AVID), and then exported again before it can be published. This provides editorial control, but slows down workflows for content that needs to be published at the speed of live. Grabyo’s new reframing toolkit brings that process directly into the platform.
Users can convert horizontal clips into alternative formats such as 9:16 or 1:1 without leaving the clipping tools within Grabyo Studio. This reduces the need for complex third-party editing tools and keeps the entire publishing workflow in one cloud platform – available to multiple users at the same time.
The result? A much faster workflow from live action to multi-platform distribution.
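To make the format shift concrete, here is a rough sketch of the crop-window geometry involved – our own illustration, not a Grabyo API – showing the size of a full-height 9:16 or 1:1 window taken from a standard 1920x1080 source.

```typescript
// Illustrative only: the width of a full-height crop window for a given
// target aspect ratio, cut from a 16:9 source frame.
function cropWidthForAspect(sourceHeight: number, targetW: number, targetH: number): number {
  return Math.round(sourceHeight * (targetW / targetH));
}

console.log(cropWidthForAspect(1080, 9, 16)); // 608px-wide window for a 9:16 clip
console.log(cropWidthForAspect(1080, 1, 1));  // 1080px-wide window for a 1:1 clip
```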
Motion that feels natural on mobile
Reframing video is not simply about cropping the image.
The real challenge is movement.
When the subject of a clip shifts across the frame, the vertical window needs to follow that action smoothly. In live sport, the movement is rarely predictable. A player breaks forward, the ball changes direction, or the focus of the moment shifts in an instant. In news or interview formats, the framing can move back and forth between speakers.
In both cases, poorly handled reframing can feel jittery, abrupt or unnatural, which quickly becomes noticeable on mobile devices.
To address this, Grabyo’s reframing tools use data-driven keyframing and smoothing techniques to guide the motion of the frame across the video.
Instead of manually adjusting every frame, the system defines key points in the video and automatically calculates the movement between them, ensuring transitions feel controlled and fluid.
For viewers, the result is a vertical video that feels stable and natural. For editors, it means far less manual work.
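As a rough illustration of the general idea – an assumption about the technique rather than Grabyo's actual implementation – a keyframed crop window can be described by a handful of key points, with the positions in between interpolated automatically so the window glides rather than jumps.

```typescript
// Minimal sketch of keyframe-driven reframing (illustrative only).
// A keyframe fixes the centre of the vertical crop window at a given time;
// positions between keyframes are interpolated.

interface Keyframe {
  time: number;    // seconds into the clip
  centerX: number; // horizontal centre of the crop window, in pixels
}

// Interpolate the crop centre at an arbitrary time from the surrounding keyframes.
function cropCenterAt(keyframes: Keyframe[], time: number): number {
  const sorted = [...keyframes].sort((a, b) => a.time - b.time);
  if (time <= sorted[0].time) return sorted[0].centerX;
  const last = sorted[sorted.length - 1];
  if (time >= last.time) return last.centerX;

  for (let i = 0; i < sorted.length - 1; i++) {
    const a = sorted[i];
    const b = sorted[i + 1];
    if (time >= a.time && time <= b.time) {
      const t = (time - a.time) / (b.time - a.time); // 0..1 between the two keyframes
      return a.centerX + (b.centerX - a.centerX) * t;
    }
  }
  return last.centerX;
}

// Example: the action drifts from the left of a 1920px frame to the right.
const keyframes: Keyframe[] = [
  { time: 0, centerX: 600 },
  { time: 2, centerX: 900 },
  { time: 5, centerX: 1400 },
];
console.log(cropCenterAt(keyframes, 3.5)); // 1150 – the window is midway through the pan
```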
A flexible toolkit for different workflows
The reframing tools are designed to support different levels of editorial control depending on the situation.
Some clips may require quick adjustments to prepare them for social publishing. Others may need more careful framing to follow a subject or moment precisely.
To support these different workflows, the system offers a combination of:
- AI-Assisted Tracking: Let the system identify and follow the primary action automatically.
- Performance Tracking: Use your mouse to “track” the action in real time as you watch the clip.
- Manual Keyframe Editing: Add key points in the timeline for pixel-perfect adjustments.
- Motion Easing: Apply professional smoothing to transitions so the frame’s horizontal or vertical movement starts and stops with natural momentum.
Each approach feeds into the same underlying keyframe model, allowing users to refine the framing of a clip when needed while still maintaining a simple, fast editing process.
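Motion easing slots into that same model by changing how the window travels between keyframes. A minimal sketch, again assuming a generic ease-in-out curve rather than Grabyo's exact smoothing: replacing the linear blend with an eased one means the crop window accelerates out of one keyframe and decelerates into the next.

```typescript
// Smoothstep-style ease-in-out: maps 0..1 to 0..1 with zero velocity at both ends.
function easeInOut(t: number): number {
  return t * t * (3 - 2 * t);
}

// Eased interpolation between two keyframed window centres.
function easedCenter(startX: number, endX: number, t: number): number {
  // t is the normalised position between the two keyframes, from 0 to 1.
  return startX + (endX - startX) * easeInOut(t);
}

// Halfway through a pan the eased and linear positions match,
// but near the ends the eased window moves far more gently.
console.log(easedCenter(600, 1400, 0.1)); // about 622 (barely moving yet)
console.log(easedCenter(600, 1400, 0.5)); // 1000 (same as the linear midpoint)
console.log(easedCenter(600, 1400, 0.9)); // about 1378 (settling into the keyframe)
```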
This balance between automation and control is key. The aim is not to recreate a full video editing suite, but to provide the tools needed to produce high-quality vertical clips quickly.
Building the editorial toolkit for modern video production
The introduction of data-driven reframing isn’t just a feature update; it’s part of our broader vision for distributed, cloud-native video production.
Recent updates like Open Captions in Live Clipping have focused on simplifying social video workflows. Meanwhile, AI-powered production capabilities such as Elemental Inference are helping teams generate vertical outputs earlier in the production pipeline.
Together, these developments are building a more complete environment for creating and publishing content across platforms.
In a mobile, social world, capturing the clip is table stakes. The real competitive edge is the speed at which you can transform those moments into an authentic, high-fidelity experience for every viewer on every screen.