The Grabyo Glossary
Get more information on key terms around cloud video production and distribution.
Tip: Use the Command (⌘) + F (macOS) or Control (Ctrl) + F keyboard shortcut to search for terms on this page.
- AFL, PFL and SIP →
AFL: AFL stands for “After-Fader Listen” or “After-Fader Listening.” In audio production, it refers to a monitoring mode where the audio signal is heard after it passes through the fader on the mixing console. When AFL is engaged for a specific channel, the audio signal from that channel is isolated and can be monitored independently of the main mix.
PFL: PFL stands for “Pre-Fader Listen” or “Pre-Fader Listening.” Similar to AFL, it is a monitoring mode used in audio production. When PFL is engaged for a specific channel, the audio signal from that channel is monitored before it reaches the fader, allowing engineers to listen to the raw signal without any adjustments made by the fader position.
SIP: SIP stands for “Solo in place”. In audio mixing, this refers to a feature or action that isolates and plays back a specific audio track or channel within a mix while keeping the channel’s original position within the stereo or surround sound field. This allows the audio engineer to focus on the individual nuances, characteristics, and balance of the selected track without affecting the overall mix.
Read more: Broadcast-grade audio mixing in the cloud
- AI Clipping → AI-powered live clipping simplifies capturing and sharing key moments from live events. AI technology automatically identifies and creates short video clips from broadcasts, detecting significant events in real-time. These clips can be quickly shared on social media, websites, and other platforms.
- Alternate Broadcasting → Alternate broadcasting refers to adding or providing alternative streams of content alongside the main broadcast of a live sports event. This approach enhances the viewing experience by making it more immersive, personalized, and engaging, appealing to a broader audience. Examples include behind-the-scenes content, fan-centric broadcasts, and second-screen experiences.
Read more: Alternate broadcasting explained
- API → API stands for Application Programming Interface. It’s a set of rules that allows different software applications to communicate and interact with each other. APIs define how software components should interact, making it easier for developers to use existing services and functionality without having to understand the underlying code.
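As a rough illustration, the sketch below calls a hypothetical REST endpoint and reads its JSON response. The URL, token, and parameters are placeholders, not a real Grabyo API.

```python
# A minimal sketch of calling a REST API with the requests library.
# The endpoint, token, and parameters below are hypothetical placeholders.
import requests

response = requests.get(
    "https://api.example.com/v1/videos",
    headers={"Authorization": "Bearer YOUR_TOKEN"},   # placeholder auth scheme
    params={"status": "published", "limit": 10},
    timeout=10,
)
response.raise_for_status()        # raise an error for 4xx/5xx responses
for video in response.json():      # assumes the endpoint returns a JSON list
    print(video)
```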
- Aspect Ratio → Aspect ratio refers to the proportional relationship between the width and height of an image or video frame. It is represented as a ratio of two numbers, typically separated by a colon (e.g., 16:9). Aspect ratios are essential for determining the shape and dimensions of visual content, ensuring proper display on various screens and devices.
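For example, a quick way to work out matching dimensions from an aspect ratio:

```python
def height_for_width(width: int, ratio_w: int, ratio_h: int) -> int:
    """Return the frame height that keeps the given aspect ratio."""
    return round(width * ratio_h / ratio_w)

print(height_for_width(1920, 16, 9))   # 1080 (16:9 full HD)
print(height_for_width(1080, 9, 16))   # 1920 (9:16 vertical video)
```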
- ASR → ASR in video production stands for “Automatic Speech Recognition.” It refers to the technology and process of converting spoken language or audio speech into written text automatically and accurately. ASR systems utilize advanced algorithms and machine learning techniques to analyze audio input, recognize spoken words, and transcribe them into textual form.
Read more: Automated captions for video editing
- Asset Snapping → Asset snapping, also known as snapping or magnetic snapping, is a feature in video editing that automatically aligns or “snaps” video clips, images, or other assets to specific points or markers on the timeline. This ensures precise alignment and positioning of assets relative to each other or specific reference points, simplifying the editing process and enabling smooth transitions between different elements in the video.
Read more: Snapping and asset positioning for live broadcast layouts
- Audio Compression → Audio compression is a process used to reduce the size of audio files by removing or reducing parts of the audio signal, allowing for more efficient storage and transmission of audio data.
Read more: Advanced audio: Multi-track output and audio compression
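As a minimal sketch, the command below re-encodes an uncompressed audio file at a lower bitrate using the ffmpeg CLI (assuming ffmpeg is installed; the file names are placeholders):

```python
# Lossy audio compression: re-encode a WAV source as 128 kbps AAC.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.wav",      # uncompressed source audio (placeholder name)
        "-c:a", "aac",          # encode with the AAC codec
        "-b:a", "128k",         # target a 128 kbps bitrate
        "output.m4a",
    ],
    check=True,
)
```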
- Audio Mixing → In broadcasting, audio mixing is performed to ensure that various audio elements, such as voices, background music, sound effects, and ambient sounds, are properly blended and synchronised. The goal is to deliver a balanced and immersive audio presentation that enhances the viewing experience.
Broadcast audio mixers or engineers use specialised audio consoles or digital audio workstations (DAWs) to manipulate and control audio signals in real-time.
Read more: Audio mixing in a web browser
- Automated Transcription → Automated transcription (also see ASR) is the process of automatically converting spoken language from audio or video files into written text without human intervention. It is capable of transcribing content across multiple languages. This technology offers several advantages, including faster transcription times, cost-effectiveness, and the ability to quickly transcribe large volumes of content.
Read more: Collaborative video editing in the cloud
- AV Decoding → AV decoding, short for Audio-Video decoding, refers to the process of converting compressed audio and video data into a format that can be played back or displayed by a multimedia player or application. The decoding process ensures that users can enjoy high-quality audio and video playback, and it is a crucial aspect of the overall multimedia experience.
Read more: Custom A/V decoding
- AWS → AWS stands for “Amazon Web Services.” It is a comprehensive and widely-used cloud computing platform provided by Amazon. AWS has a global network of data centers, allowing platforms built on its technology to select from various geographic regions to host their applications and data, ensuring low-latency access and redundancy to end users.
Read more: AWS for video production platforms
- Behind the Scenes → Behind-the-scenes content, often abbreviated as BTS content, refers to supplemental material or exclusive footage that offers viewers a glimpse into the production process of a sport, television show, music video, or other creative projects. This type of content provides an insider’s perspective, showcasing what happens “behind the scenes” during the making of the main production.
Read more: UFC brings fans backstage at fight nights
- Browser Based → Browser-based video production is the process of creating, editing, and producing video content entirely within a web browser, eliminating the need for installed software or traditional video editing tools.
- CDN → CDN stands for Content Delivery Network. It is a network of distributed servers or data centres strategically located across different geographical locations. The primary purpose of a CDN is to deliver web content, such as images, videos, scripts, and other static or dynamic assets, to end-users more efficiently and quickly.
- Clean / Dirty feeds → A clean feed, also known as a program feed or a direct feed, is the unaltered, primary version of a live video broadcast. It contains the raw video and audio content without any overlays, graphics, or additional elements. Clean feeds are typically sent directly from the source, such as a studio or an outside broadcast (OB) van, to other broadcasters or distribution networks.
A dirty feed, on the other hand, is the version of the live video broadcast that includes additional elements, such as graphics, logos, subtitles, advertisements, or other overlays. The dirty feed is the final version that viewers at home see on their television screens.
- Cloud Computing → Cloud computing has become a fundamental technology for individuals, businesses, and organisations, offering flexibility, scalability, cost efficiency, and accessibility to a wide range of computing resources and services. It refers to the delivery of various computing services, including storage, processing power, software, databases, networking, and more, over the internet (“the cloud”), instead of relying on local servers or physical infrastructure.
- Cloud Video Production → Cloud video production tools can be accessed anywhere, at any time, on almost any device, allowing your video teams to work from the most optimal location for them, and collaborate effectively across the world.
Grabyo is the leading cloud video platform, combining broadcast-grade production tools with the flexibility of the cloud.
Read more: Cloud video production with Grabyo
- Collaboration Software → Collaboration software enables smooth coordination, communication, and collaboration among team members involved in live broadcast production. It facilitates seamless collaboration, ensuring efficient workflows and successful execution of live events.
These tools support remote productions, allowing team members to collaborate from different locations. They provide remote access to production elements, enabling effective communication and coordination, even when team members are geographically dispersed.
Read more: Collaborative live production in the cloud
- Content Distribution → Content distribution is the process of delivering and sharing live video content with a broad audience across multiple platforms and channels. It involves transmitting the live video feed to viewers in real-time, enabling them to access and watch the content as it unfolds. This distribution can take place through streaming platforms, broadcasting channels, online platforms (including social media), content syndication, and various mobile and OTT devices.
- Content Ingest → Video ingest refers to the process of capturing or importing video content into a digital system or workflow for further processing, storage, or distribution. It involves transferring video files or streams from a source device or media to a designated destination, typically a computer or server.
- Control Room → A control room, sometimes referred to as a production control room, is a physical studio space that houses all of the live production equipment and personnel for live broadcasting. Cloud-based control rooms are collaborative, browser-based virtual workspaces that encompass all of these live vision mixing and distribution tools.
Read more: Grabyo’s cloud-based control room
- Custom Hardware Integrations → Hardware integrations refer to the process of connecting and incorporating physical hardware devices with cloud-based video production services. These integrations enable the seamless interaction between cloud-based software and on-premise hardware, enhancing the capabilities and efficiency of video production workflows.
Read more: Integrating hardware with cloud production
- Detachable Monitors → Within the Grabyo production environment, detachable monitors are program windows that can be popped out and viewed independently from the main production area. This feature allows the Producer Control Room to extend across multiple screens, providing a more flexible and customizable workspace for video production.
- Digital live show → A digital live show, also known as a virtual live show or online live show, refers to a live performance, event, or broadcast that takes place in a digital or virtual environment rather than in a physical venue. In a digital live show, performers, speakers, or presenters deliver their content remotely, and the audience participates and engages through online platforms, streaming services, or virtual event spaces.
Read more: How to create a digital live show
- Digital News Coverage → Digital news coverage refers to the reporting, dissemination, and distribution of news and current events through digital platforms and technologies. It encompasses the use of various digital mediums, such as websites, mobile apps, social media, podcasts, video streaming, and email newsletters, to deliver timely and relevant news to a wide audience.
Read more: Delivering digital news coverage
- Digital-First Sponsorship → Digital-first sponsorship refers to a marketing strategy in which sponsors prioritise and focus their advertising and promotional efforts on digital platforms and channels before considering traditional or offline media. In this approach, sponsors seek to maximise the visibility and impact of their brand by leveraging the capabilities of digital technologies and online channels to reach their target audience effectively.
- Direct to Consumer → Direct-to-consumer (DTC or D2C) refers to a distribution model where video content creators or production companies bypass traditional broadcast networks or media platforms and deliver their video content directly to the end-users or consumers. Instead of relying on third-party distributors or broadcasters, DTC video producers use digital platforms, websites, streaming services, or social media channels to reach their target audience directly.
Read more: D2C streaming for sports in Europe
- End-to-End Video Streaming → End-to-end video streaming refers to the process of delivering video content seamlessly from the source to the viewer’s screen, encompassing all the stages and components involved in the streaming workflow. In this context, “end-to-end” emphasises the complete journey of the video, from its origination at the content source to its final display and playback on the viewer’s device.
- Facial Detection → Facial detection refers to the automated process of identifying and locating human faces within a live video feed in real-time. This technology uses computer vision algorithms and artificial intelligence to detect facial features and distinguish human faces from other objects or backgrounds in the video stream.
Read more: Facial recognition in AI biometrics
- FAST → FAST (Free Ad-Supported TV) is a digital media distribution model that provides viewers with free access to a diverse range of ‘TV channels’. These channels broadcast video content, such as live events, TV shows and movies, with all revenue generated by advertising. There are multiple FAST networks across the globe, and media organizations can set up their own FAST channel and purchase a slot in one of these networks.
- Feeds → Video feeds (or streams) refer to the live or recorded streams of video content transmitted or delivered from a source to a destination. These feeds can be generated by various devices, such as cameras, video capture systems, or streaming equipment, and are typically transmitted over networks or the internet to be viewed by audiences on different devices, such as computers, smartphones, or television screens.
- Frame Per Second / FPS → FPS stands for “frames per second.” It is a measurement used in video and animation to describe the number of individual frames displayed or recorded each second. In digital video, each frame is a complete image that, when played in quick succession, creates the illusion of motion.
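For example, frame rate determines both how many frames a recording contains and how long each frame stays on screen:

```python
# Frames per second in practice: total frame count and per-frame interval.
fps = 25                      # e.g. a common broadcast frame rate
duration_seconds = 90 * 60    # a 90-minute match

total_frames = fps * duration_seconds
frame_interval_ms = 1000 / fps

print(total_frames)           # 135000 frames
print(frame_interval_ms)      # 40.0 ms between frames at 25 fps
```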
- Frame Rate Upconversion (FRUC) → Frame rate upconversion, also known as frame rate interpolation or frame rate conversion, is a video processing technique used to increase the frame rate of a video by generating intermediate frames between the original frames. The purpose of frame rate upconversion is to improve the smoothness and motion quality of the video, especially when converting content from a lower frame rate to a higher frame rate.
- FTP → FTP destinations refer to remote server locations or storage locations that can be accessed and used for file transfer via the File Transfer Protocol (FTP). FTP is a standard network protocol used to transfer files from one host or computer to another over a TCP-based network, such as the internet or a local intranet.
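As a minimal sketch, the snippet below uploads a rendered clip to an FTP destination using Python’s standard ftplib module (the host, credentials, and paths are placeholders):

```python
# Upload a finished clip to a remote FTP destination.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:          # placeholder host
    ftp.login(user="username", passwd="password")
    ftp.cwd("/incoming/clips")               # placeholder destination folder
    with open("highlight.mp4", "rb") as f:
        ftp.storbinary("STOR highlight.mp4", f)
```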
- Generative AI → Generative AI, short for Generative Artificial Intelligence, is a subset of artificial intelligence that focuses on creating new and original content. Unlike traditional AI models that are used for specific tasks like classification or prediction, generative AI is designed to produce data, such as images, videos, audio, or text, that is indistinguishable from human-generated content.
- Graphic Transitions → Graphic transitions are utilised in production to seamlessly and aesthetically transition between different elements or scenes in video content. They play a crucial role in enhancing visual continuity, amplifying narrative or visual impact, adding a polished touch, and emphasising key points or information.
In production, common graphic transitions include fades (fade in, fade out), dissolves, wipes, slides, and morphs. These transitions significantly contribute to improving the visual flow, storytelling, and overall production value of video content.
Read more: Using graphic transitions
- Hardware Support → In cloud production, hardware support refers to the integration and utilization of physical hardware components to enhance and facilitate various aspects of video production.
Read more: Hardware Support explained
- HLS → HLS (HTTP Live Streaming) is a streaming protocol developed by Apple Inc. that enables the delivery of video and audio content over the internet. HLS breaks video files into smaller chunks and uses HTTP (Hypertext Transfer Protocol) to deliver these chunks to viewers, allowing adaptive bitrate streaming.
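As a minimal sketch, an existing MP4 can be repackaged into HLS segments and a playlist with the ffmpeg CLI (assuming ffmpeg is installed; file names are placeholders):

```python
# Package a file into HLS: ~6-second segments plus an .m3u8 playlist.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "program.mp4",
        "-codec", "copy",            # keep the existing encode, just repackage
        "-f", "hls",
        "-hls_time", "6",            # ~6-second media segments
        "-hls_list_size", "0",       # keep every segment in the playlist
        "playlist.m3u8",
    ],
    check=True,
)
```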
- Hybrid Events → Hybrid events, also known as hybrid conferences, are events that combine both in-person and virtual elements, allowing participants to attend and engage either in-person at a physical venue or remotely through virtual platforms.
Read more: A guide to producing hybrid events
- Hybrid Production → Hybrid production refers to a video production approach that combines elements of both traditional in-person production and remote or virtual production methods. In a hybrid production, some aspects of the production process are conducted on-site with physical presence, while others are handled remotely using digital technologies and virtual tools.
Read more: Asiaworks produce financial conference in hybrid workflow
- Ingest → Video ingest, in the context of video production, refers to the process of capturing or importing video content from various sources and formats into a centralised system or platform for further editing, processing, or distribution. Video ingestion is a critical step in the production workflow, as it allows video content to be organised, managed, and prepared for post-production tasks.
- Ingest Tracking → Video ingest tracking refers to the process of monitoring and recording the status, progress, and details of video content as it is being captured, imported, or ingested into a centralised system or platform. The tracking system provides real-time visibility and management of the ingest process, allowing production teams to monitor the flow of video content from various sources to the designated storage or editing environment.
Read more: Monitoring ingest health
- Instant Replay → Instant Replay is a video technology used in sports broadcasting and other live events to replay and review key moments or plays during a match or performance. It allows broadcasters and officials to show selected video clips to the audience in near real-time, enabling viewers to see critical actions or incidents from different angles and at a slower speed for better analysis.
Read more: Creating instant replays in the cloud
- Interactive Positioning → Interactive positioning, in the context of video production and virtual environments, refers to the ability to dynamically and interactively position virtual objects, elements, or characters within a production space with instant feedback and display. This technology allows content creators, designers, or users to manipulate and control the placement, orientation, and movement of virtual assets.
Read more: Interactive positioning for live broadcasts
- ISO Recording → ISO recording, short for isolated recording, allows you to create individual recordings of each camera input or video source in live video production. While the main program output is typically recorded for distribution purposes, ISO recording provides the flexibility to independently capture each camera feed. This capability grants more freedom and flexibility during post-production, enabling enhanced editing and manipulation of the individual camera recordings.
Read more: How to use ISO recording
- Key and Fill → Key and fill graphics, also known as key and fill animations or keying, are a technique used in video production and broadcasting to overlay graphics or images onto video footage. This process involves combining two video signals: the key signal (foreground) and the fill signal (background) to create a composite image. The key signal contains the graphics or images with transparency information, while the fill signal serves as the background onto which the graphics are placed.
Read more: Using key and fill graphics
- Key bindings → Key bindings, also known as keyboard shortcuts or hotkeys, refer to predefined combinations of keyboard keys that are assigned to perform specific actions or functions within a software application or operating system. When a user presses the designated key combination, the corresponding action is executed without the need to navigate through menus or use a mouse.
Read more: Key bindings in live production
- Lazy Talkback → Lazy talkback, in the context of audio production and broadcasting, refers to a communication system or setup that allows individuals, such as audio engineers or directors, to have constant, hands-free communication with other team members during a live broadcast or recording session. The term “lazy” does not imply a lack of effort, but rather indicates the convenience and ease of use of the talkback system.
Read more: Remote commentary and talkback
- Live clipping → Live video clipping, also known as real-time video clipping, refers to the process of selecting and extracting specific segments or highlights from a live video stream while the event is still ongoing. This allows content creators, broadcasters, or social media teams to quickly create and share short clips or highlights of the live event as they are happening.
Read more: Clipping from live feeds
- Live Event Production → Live event video production refers to the process of capturing, recording, and broadcasting video content in real-time during live events, such as concerts, conferences, sports matches, theatrical performances, ceremonies, and other gatherings. Some of the key aspects of these productions include: multi-camera setups, video switching, live streaming, graphics and overlays, audio integration and instant replays.
Read more: Producing live events
- Localised Clipping → Localised video clipping, also known as regional video clipping or geo-targeted video clipping, refers to the process of selecting and extracting specific segments or highlights from a video content library to cater to specific local or regional audiences. This technique involves creating clips that are relevant and appealing to viewers in specific geographic locations, taking into account cultural, linguistic, or regional preferences.
Read more: Localizing video with multi-track audio
- Logo Detection → Logo detection, also known as logo recognition or logo tracking, is a technology used in video analysis and content identification to automatically detect and recognize logos or trademarks that appear within video content. This process involves analysing video frames, identifying logo patterns, and matching them against a database of known logos to determine the presence and location of specific logos within the video.
Read more: AI vision with logo detection
- Low-latency → Low latency in the context of video production refers to the minimal delay or lag between the time an action or event occurs and the moment it is captured, processed, transmitted, and displayed on a video screen or monitor. In other words, low latency ensures that there is little to no noticeable delay between the real-time event and its representation in the video output.
- Magazine Shows → Digital magazine shows refer to a type of online video content that follows a format similar to traditional magazine-style programs but is exclusively produced and distributed in a digital format. These shows typically consist of segmented episodes or issues, covering various topics, stories, or features, just like a magazine. Digital magazine shows are accessible on internet platforms, websites, streaming services, or social media, catering to a digital-savvy audience.
Read more: LA Kings deliver mid-week magazine shows
- Markers → Video markers, also known as timecode markers or simply markers, are points of reference or notations placed within a video timeline to indicate specific events, scenes, or important moments. These markers serve as visual cues or bookmarks, making it easier for video editors, producers, or collaborators to navigate and identify specific sections of the video during the editing or post-production process.
Read more: Grabyo introduces custom video markers
- Mbps → In the context of video production, Mbps (megabits per second) refers to the data transfer rate or bit rate used to measure the amount of data transmitted or processed in a video stream over a one-second period. It is a critical metric that determines the quality, resolution, and smoothness of video playback or streaming.
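For example, the bitrate in Mbps gives a quick estimate of how much storage or bandwidth a stream consumes:

```python
# Rough file-size estimate from bitrate: megabits per second to gigabytes.
bitrate_mbps = 8              # e.g. a 1080p stream at 8 Mbps
duration_seconds = 60 * 60    # one hour

total_megabits = bitrate_mbps * duration_seconds
size_gigabytes = total_megabits / 8 / 1000   # 8 bits per byte, 1000 MB per GB

print(size_gigabytes)         # 3.6 GB for one hour at 8 Mbps
```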
- Media Player → A media player refers to a software or hardware device designed to play and display various types of multimedia content, including video and audio files. Media players are essential tools used by live production crews to preview, review and play out during live broadcasts. Media players typically have controls for playback including speed controls, play/pause and skipping.
Read more: Using media players in a cloud control room
- Media Sponsorship → Media sponsorship refers to a partnership or collaboration between a media rights holder and a sponsor to promote the sponsor’s brand within video content, typically using branded graphics or mentions of the sponsor. The sponsor provides financial or other support to the rights holder in exchange for exposure and visibility within content for their brand.
- Metadata → Metadata refers to descriptive or contextual information that provides additional details about a particular piece of data or content. In the context of digital media, including videos, metadata is structured data that describes various aspects of the media file, such as title, author, date, duration, resolution, language, and keywords. It may also include technical details like file format, codec, and bitrate.
Read more: Inserting metadata tags to VOD content
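As a minimal sketch, technical metadata can be read from a media file with the ffprobe CLI (assuming ffprobe is installed; the file name is a placeholder):

```python
# Read container and stream metadata from a clip as JSON via ffprobe.
import json
import subprocess

result = subprocess.run(
    [
        "ffprobe", "-v", "quiet",
        "-print_format", "json",
        "-show_format", "-show_streams",
        "clip.mp4",
    ],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)
print(info["format"]["duration"])          # e.g. "120.040000"
print(info["streams"][0]["codec_name"])    # e.g. "h264"
```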
- MIDI Devices → MIDI devices refer to hardware devices that utilize the Musical Instrument Digital Interface (MIDI) protocol to communicate and exchange data with software platforms. This protocol allows video producers and editors to utilize physical MIDI devices to control and perform actions within video production software.
Read more: Using MIDI devices for live video production
- MPEG-TS → MPEG-TS (MPEG Transport Stream) is a standard container format used for the transmission and storage of audio, video, and other data in digital media. It is widely used in broadcasting, streaming, and video production applications due to its efficiency in transmitting high-quality multimedia content over various networks.
- Multi Platform Distribution → Multi-platform distribution refers to the process of distributing content across multiple platforms or channels. In the context of video production, it means making the same video content available on various platforms or streaming services, such as YouTube, Facebook, Instagram, Twitter, and other online video platforms.
The goal of multi-platform distribution is to reach a wider audience by leveraging different platforms’ user bases and content consumption behaviours.
- Multi Camera Production → Multi-camera production, also known as multicam production, is a video production technique that involves using multiple cameras to capture a live event or recording. In this approach, two or more cameras are strategically positioned at different angles or locations to capture various perspectives of the subject or scene simultaneously.
- Multistreaming → Multistreaming specifically refers to the act of live streaming video content to multiple platforms simultaneously. When live streaming, content creators can use specialised software or services to broadcast their live video feed to multiple platforms, allowing viewers to watch the same live content on different websites or social media platforms in real-time.
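As a minimal sketch, ffmpeg’s tee muxer can encode once and push the same stream to two RTMP ingest points at the same time (the URLs and stream keys are placeholders; assumes ffmpeg is installed):

```python
# Multistreaming: one encode, two simultaneous RTMP outputs via the tee muxer.
import subprocess

outputs = (
    "[f=flv]rtmp://live.platform-a.example/app/STREAM_KEY_A|"
    "[f=flv]rtmp://live.platform-b.example/app/STREAM_KEY_B"
)

subprocess.run(
    [
        "ffmpeg", "-re", "-i", "program.mp4",
        "-c:v", "libx264", "-c:a", "aac",
        "-map", "0",
        "-f", "tee", outputs,
    ],
    check=True,
)
```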
- Multiviewer → A video multiviewer is a specialised software solution used in video production and broadcasting to display multiple video sources on a single monitor simultaneously. It enables video professionals to monitor and analyse several video feeds or sources in real-time, offering a comprehensive view of the content being produced, edited, or broadcasted.
Read more: Using a browser-based multiviewer
- On-premise → On-premise technology refers to software, hardware, or technology solutions that are deployed and operated within an organisation’s physical premises or data centres. In this model, the organisation owns, manages, and maintains the technology infrastructure directly on-site, rather than relying on external cloud-based or off-site hosting services.
- OTT → OTT stands for “Over-The-Top,” and it refers to the delivery of audio, video, and other media content over the internet directly to users, bypassing traditional cable, satellite, or broadcast television platforms. OTT content can be accessed on various internet-connected devices, such as smartphones, tablets, smart TVs, gaming consoles, and computers, allowing users to watch or listen to content at their convenience.
- PiP → Picture-in-picture (PiP) is a video display technique that allows a smaller video or image to be displayed within a larger video or image, creating a “window within a window” effect. In this layout, the smaller video or image appears on top of the primary content, enabling viewers to simultaneously watch two or more videos or images at the same time.
Read more: Creating PiP layouts with computer vision
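As a minimal sketch, a picture-in-picture layout can be composited with ffmpeg’s overlay filter, scaling the second input down and pinning it to the bottom-right corner (file names are placeholders; assumes ffmpeg is installed):

```python
# Picture-in-picture: scale the inset feed and overlay it on the main feed.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "main_feed.mp4",
        "-i", "inset_feed.mp4",
        "-filter_complex",
        "[1:v]scale=480:-2[pip];[0:v][pip]overlay=W-w-40:H-h-40",
        "-c:a", "copy",
        "pip_output.mp4",
    ],
    check=True,
)
```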
- Proxies → Video proxies are lower-resolution and smaller-sized versions of original video files used in video production and editing workflows. They serve as substitutes for the full-resolution, high-quality video files and allow video editors, producers, or collaborators to work with footage more efficiently, especially when dealing with large or high-resolution video files.
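For illustration, a low-resolution proxy can be generated from a camera master with the ffmpeg CLI (file names and settings are placeholders; assumes ffmpeg is installed):

```python
# Generate a small, lower-quality proxy for faster editing.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "camera_master.mov",
        "-vf", "scale=640:-2",              # shrink to 640px wide, keep aspect ratio
        "-c:v", "libx264", "-crf", "28",    # smaller, lower-quality encode
        "-c:a", "aac", "-b:a", "96k",
        "camera_master_proxy.mp4",
    ],
    check=True,
)
```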
- Real Time Clipping → Real-time video clipping refers to the process of extracting and creating short video clips from a live video feed as it is being broadcasted or recorded. This capability allows video producers or editors to quickly select specific moments or highlights from the ongoing live video stream and instantly create short clips for immediate distribution or sharing.
Read more: Clipping from live feeds
- REMI → Remote production, or REMI (Remote Integration Model), refers to a production workflow that allows live content to be captured from remote locations, and managed from a central control room.
Effective remote production software, like Grabyo, combines broadcast-grade production tools with the flexibility of the cloud, to deliver live broadcasts and events in a collaborative, browser-based workspace.
Read more: Building REMI workflows
- Remote Commentators → Remote commentary refers to the practice of providing live or recorded commentary on video content from a remote location, rather than being physically present at the filming or recording site. This allows commentators, analysts, or hosts to contribute their insights, narration, or reactions to the video content while being geographically distant from the main event or production.
Read more: How major broadcasters leverage remote commentary
- Remote Guest → A remote guest refers to an individual or participant who is invited to appear on a video program, live broadcast, interview, or virtual event from a location that is separate from the main production site. The remote guest is not physically present in the same studio or event location as the video production team, host, or other participants.
Read more: Hosting remote talent on live broadcasts
- Remote Video Production → Remote video production refers to the process of creating video content, such as live events, broadcasts, interviews, or other video programs, from separate locations without the need for all participants to be physically present in the same studio or production site. In this production approach, various video and audio sources, equipment, and production elements are distributed across different locations, and video production teams collaborate remotely to produce the final content.
Read more: Remote production in the cloud
- RIST → RIST stands for Reliable Internet Stream Transport, which is a video transport protocol designed to ensure the reliable and secure transmission of video content over the internet. RIST was developed to address the challenges of transmitting high-quality video streams over unpredictable internet connections, such as those encountered in live video production and broadcasting.
- RTMP → RTMP (Real-Time Messaging Protocol) streaming refers to the process of delivering real-time audio, video, and data over the internet using the RTMP protocol, specifically designed for low-latency transmission where real-time consumption is essential. It enables live video streaming and interactive communication between the streaming server and the viewer’s device.
Read more: Broadcasting to custom RTMP destinations
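As a minimal sketch, a file (or any other source) can be pushed to an RTMP ingest point with the ffmpeg CLI (the ingest URL and stream key are placeholders; assumes ffmpeg is installed):

```python
# Push a source to an RTMP ingest point in an FLV container.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-re",                    # read the input at its native frame rate
        "-i", "program.mp4",
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4500k",
        "-c:a", "aac", "-b:a", "160k",
        "-f", "flv",              # RTMP expects an FLV container
        "rtmp://ingest.example.com/live/STREAM_KEY",
    ],
    check=True,
)
```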
- RTMPS → RTMPS is a variation of RTMP that uses encryption to add extra security to an RTMP stream. This ensures that only authorized end points are able to receive the stream. RTMPS can often be used interchangeably with RTMP, as long as live broadcasting platforms, such as Grabyo, support it.
- RTP → RTP stands for Real-time Transport Protocol. RTP is a network protocol used for the transmission of real-time audio and video data over IP networks, enabling the delivery of live streaming and interactive multimedia content.
- Amazon S3 → Amazon S3 (Simple Storage Service) is a cloud-based object storage service provided by Amazon Web Services (AWS). It is designed to store and retrieve any amount of data over the internet, making it a highly scalable and reliable solution for various storage needs.
Read more: How Amazon S3 works
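As a minimal sketch, the snippet below uploads a clip to S3 and generates a time-limited sharing link using the boto3 library (the bucket name and object key are placeholders; assumes AWS credentials are configured):

```python
# Upload a clip to Amazon S3 and create a presigned download link.
import boto3

s3 = boto3.client("s3")

# Upload a finished clip to a bucket (placeholder names).
s3.upload_file("highlight.mp4", "my-video-bucket", "clips/highlight.mp4")

# Generate a time-limited download link for sharing.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-video-bucket", "Key": "clips/highlight.mp4"},
    ExpiresIn=3600,   # link valid for one hour
)
print(url)
```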
- SaaS → SaaS stands for Software as a Service. It is a cloud computing model where software applications are delivered over the internet as a service. In the SaaS model, users do not need to install, manage, or maintain the software on their local devices; instead, they access the software through a web browser or dedicated application.
- SCTE → SCTE markers, also known as SCTE-35 markers, refer to the SCTE-35 standard developed by the Society of Cable Telecommunications Engineers (SCTE). These markers are used in digital video streams to indicate specific points or events for signal insertion or ad insertion during broadcast and distribution.
Read more: Inserting SCTE-35 ad markers into live productions
- Simulcasting → Simulcasting, short for “simultaneous broadcasting,” refers to the practice of transmitting the same content, such as a television program, radio show, or live event, across multiple platforms or channels simultaneously. The purpose of simulcasting is to reach a broader audience by distributing the content through various media outlets at the same time.
- Slo Mo → Slo-mo, short for slow motion, is a video effect used to show action or movement at a slower speed than it occurs in real-time. In the context of video production, slo-mo is achieved by recording footage at a higher frame rate than the normal playback speed and then playing it back at the standard frame rate, creating the illusion of slow-motion movement.
Read more: Slow-motion playback for instant replay
- Social Media Management → Social media management refers to the process of creating, planning, implementing, and monitoring an organisation’s social media presence and activities. It involves managing and optimising various social media platforms to build a brand’s online presence, engage with the audience, and achieve marketing and communication goals.
Read more: Social media management for businesses
- Speech-to-text → Speech to text refers to the process of converting spoken words or audio dialogue from a video into written text. It involves using automated transcription technologies and algorithms to analyse the audio content and produce a textual representation of the spoken words.
Read more: Automated captions for video editing
- SRT Streaming → SRT (Secure Reliable Transport) is a video streaming transport protocol designed to ensure secure, reliable, and low-latency video transmission over the internet. SRT was created and is maintained by Haivision, the video streaming technology company.
Read more: The SRT open source protocol | Deliver live video using SRT
- SSAI → Server-Side Ad Insertion (SSAI) is a digital advertising technique used in online video streaming. Advertisements are seamlessly inserted into the live video output on the server side. Often, adverts are retrieved from a third-party ad server and inserted into the video stream before it is delivered to an endpoint. This ensures a smooth viewing experience by reducing latency and eliminating buffering on the viewer side.
Read more: Using SSAI with SCTE-35 markers
- SSO → SSO stands for Single Sign-On. It is an authentication process that allows users to access multiple applications or systems with a single set of login credentials. Instead of requiring users to log in separately for each application or website, SSO enables them to authenticate once and gain access to multiple resources seamlessly.
- Stream Sync → Stream sync refers to the process of ensuring that multiple video streams are synchronized and aligned properly during playback. It is essential to maintain perfect synchronization between various camera angles, audio sources, and other production elements to create a cohesive and seamless final video.
- Syndication Platform → A syndication platform refers to a specialised service or software that facilitates the distribution and publishing of video content across various social media platforms. It allows content creators and publishers to reach a broader audience and maximise the exposure of their videos by sharing them seamlessly on multiple social media channels. Syndication platforms are often used by media organizations to share content with employees and external partners for publishing.
Read more: Syndicating video content
- Transcoding → Transcoding in the context of video production refers to the process of converting a video file from one format or codec to another. It involves re-encoding the video data to a different compression format or resolution, typically to ensure compatibility, optimise file size, or improve video quality for specific distribution platforms or devices.
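As a minimal sketch, a mezzanine file can be transcoded to a web-friendly H.264/AAC MP4 with the ffmpeg CLI (file names and quality settings are placeholders; assumes ffmpeg is installed):

```python
# Transcode a master file to an H.264/AAC MP4 for online delivery.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "master.mov",
        "-c:v", "libx264", "-crf", "23", "-preset", "medium",
        "-c:a", "aac", "-b:a", "192k",
        "delivery.mp4",
    ],
    check=True,
)
```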
- User Permissions → User permissions, also known as access permissions or user rights, refer to the specific privileges and restrictions granted to individual users or user groups within a computer system, network, or application. User permissions determine what actions users are allowed to perform and what resources they can access or modify.
- Vertical Video Streaming → Vertical video streaming refers to the broadcasting or delivery of video content that is formatted to fit the vertical orientation of mobile devices, particularly smartphones (9:16). In contrast to traditional horizontal videos, which are wider than they are tall, vertical videos are taller than they are wide, matching the natural way people hold their phones.
Read more: Creating vertical video streams
- Video Automation → Video automation refers to the use of automated processes and tools, aimed to streamline and optimise various aspects of video content creation, clipping, and live production workflows. It involves leveraging technology to perform repetitive tasks, enhance efficiency, and improve speed to market.
- Video Collaboration → Video collaboration refers to the use of collaborative tools and platforms that enable seamless communication, coordination, and teamwork among various stakeholders involved in the video production process. Grabyo’s cloud-based software facilitates real-time interactions and collaboration, regardless of the physical locations of the team members.
- Video Editing → Video editing is the process of manipulating and rearranging video footage, audio, and other media elements to create a coherent and visually engaging video production. It involves selecting the best shots, trimming or cutting unnecessary parts, adding transitions, special effects, audio enhancements, and integrating other multimedia elements to craft a polished and compelling final video.
Read more: Video editing in the cloud
- Video Licensing → Video licensing refers to the legal agreement between the content owner or licensor and a third party, granting specific rights to use, distribute, or monetize video content. Licensing allows individuals, businesses, or organisations to use video materials that they do not own or have created themselves, while ensuring that the usage adheres to the terms and conditions set forth in the licence agreement.
- Video on Demand → Video on Demand (VOD) refers to a video streaming service or platform that allows users to access and watch video content at their convenience. With VOD, viewers have the flexibility to choose what they want to watch and when they want to watch it, as opposed to traditional broadcasting where content is scheduled and viewers must tune in at specific times.
- Video Monetization → Video monetization refers to the process of earning revenue or income from video content that rights holders own or control. Key aspects of video monetization include: licensing and distribution, advertising and sponsorships, subscription models, Pay-Per-view and video on demand.
Read more: Monetizing live video | Monetizing VOD content
- Video Switching → Video switching, also known as vision mixing or video mixing, is a fundamental process in live video production where a video switcher or production switcher is used to select and switch between multiple video sources in real-time. The purpose of video switching is to seamlessly transition from one video source to another, enabling dynamic and smooth video presentations, broadcasts, or live events.
Read more: Video switching in a web browser
- Virtual Event Production → Virtual event production refers to the process of creating, managing, and delivering video content for online events or conferences. It involves all aspects of video production, from planning and recording to editing and streaming, with the specific goal of delivering high-quality and engaging video content for a virtual audience, across multiple digital channels.
Read more: A guide to producing virtual events
- WebRTC → WebRTC (Web Real-Time Communication) is an open-source project and a set of communication protocols and APIs that enable real-time audio, video, and data sharing directly between web browsers and mobile applications. It allows for peer-to-peer communication without the need for plugins or third-party software, making it easy to implement real-time communication features in web and mobile applications.
Read more: Remote guest contribution using WebRTC
- Zixi → Zixi is a video streaming technology designed to optimise video delivery over unreliable or congested internet connections, ensuring high-quality, low-latency, and error-free video transmission.
Read more: Ultra-low latency delivery with Zixi | Deliver live broadcasts using Zixi