How to: Upload (Real-time)
Introduction
Building on our basic upload knowledge, let’s explore uploading assets in real-time as they’re being created. This approach enables uploading files during recording, rendering, or streaming before their final size is known.
The Real-time Uploads API allows assets to become playable in Frame.io just seconds after recording completion, significantly enhancing workflow efficiency.
Demo Video
For a quick preview of this functionality, watch our video demonstration. The demo shows a render being uploaded from Adobe Media Encoder in real-time, with the video playable in Frame.io only 5 seconds after rendering completes.
Prerequisites
If you haven’t already, please review the Implementing C2C: Setting Up guide.
You’ll need the `access_token` obtained during the authentication and authorization process.
We’ll use the same test asset from the basic upload guide for our examples.
Familiarity with the Basic Upload guide is recommended, as we’ll build on those concepts.
Creating a Real-time Asset
Real-time uploads begin with a modified asset creation process. When creating the asset, set `is_realtime_upload` to `true` and omit the `filesize` parameter (or set it to `null`), since the final size isn’t known during creation:
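A sketch of the creation request in Python, assuming the standard `api.frame.io` host and the bearer `access_token` from the setup guide (`test_clip.mp4` is a hypothetical filename standing in for the test asset):

```python
import requests

response = requests.post(
    "https://api.frame.io/v2/devices/assets",
    headers={"Authorization": "Bearer <access_token>"},
    json={
        "name": "test_clip.mp4",    # hypothetical filename
        "is_realtime_upload": True,
        # "filesize" is intentionally omitted
    },
)
asset_id = response.json()["id"]
```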
API endpoint specification
Documentation for `/v2/devices/assets` can be found here.
Extension and filename
Real-time assets require a file extension. If the filename isn’t known when creating the asset, you can use the `extension` field instead (format: `'.mp4'`). This approach is preferred when you plan to update the asset name later.
The response for real-time assets is simplified compared to standard asset creation:
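An illustrative sketch of the response shape (abridged; values are placeholders):

```json
{
  "id": "<asset_id>",
  "name": "test_clip.mp4",
  "is_realtime_upload": true
}
```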
Note that `upload_urls` is absent; for real-time uploads, we’ll generate upload URLs on demand as the file is created.
Requesting Upload URLs
Let’s request a URL for the first half of our file (10,568,125 bytes), using the `asset_id` from the previous response:
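A sketch of the request in Python, reusing the bearer token from the setup guide (the payload fields are explained below):

```python
import requests

response = requests.post(
    f"https://api.frame.io/v2/devices/assets/{asset_id}/realtime_upload/parts",
    headers={"Authorization": "Bearer <access_token>"},
    json={"parts": [{"number": 1, "size": 10_568_125, "is_final": False}]},
)
upload_urls = response.json()["upload_urls"]
```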
API endpoint specification
Documentation for `/v2/devices/assets/{asset_id}/realtime_upload/parts` can be found here.
Understanding the request parameters:
- `parts`: A list of upload parts for which we need URLs. Requesting multiple URLs in a single call improves efficiency.
- `number`: The sequential part number, starting at 1. Numbers can be skipped and parts uploaded in any order, but they’ll be assembled sequentially. Cannot exceed 10,000 (an AWS limit).
- `size`: Part size in bytes. Must comply with AWS multipart upload restrictions.
- `is_final`: Indicates whether this is the final file part.
The response contains the requested upload URLs:
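An illustrative response (URL truncated):

```json
{
  "upload_urls": [
    "https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/..."
  ]
}
```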
The `upload_urls` list corresponds directly to the order of the `parts` request.
Now upload the first chunk as in the basic upload guide:
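A minimal sketch of that upload; the exact headers should match whatever the basic upload guide specifies for your file (the `content-type` and `x-amz-acl` values shown here are an assumption):

```python
with open("test_clip.mp4", "rb") as f:
    first_chunk = f.read(10_568_125)

requests.put(
    upload_urls[0],
    data=first_chunk,
    headers={"content-type": "video/mp4", "x-amz-acl": "private"},
)
```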
Next, request a URL for the second and final part:
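Assuming the second half is the same size, making the whole file 21,136,250 bytes, the request might look like this:

```python
response = requests.post(
    f"https://api.frame.io/v2/devices/assets/{asset_id}/realtime_upload/parts",
    headers={"Authorization": "Bearer <access_token>"},
    json={
        "asset_filesize": 21_136_250,  # required once any part has is_final: true
        "parts": [{"number": 2, "size": 10_568_125, "is_final": True}],
    },
)
final_url = response.json()["upload_urls"][0]
```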
Note these important additions:
- `is_final` is set to `true` for the last part, signaling that the upload will complete after this chunk.
- `asset_filesize` provides the total file size, which is required when any part has `is_final: true`.
After receiving the URL in the response:
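Again illustrative, with the URL truncated:

```json
{
  "upload_urls": [
    "https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/..."
  ]
}
```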
Upload the final chunk:
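A sketch mirroring the first PUT, under the same header assumptions:

```python
with open("test_clip.mp4", "rb") as f:
    f.seek(10_568_125)               # skip the already-uploaded first half
    final_chunk = f.read(10_568_125)

requests.put(
    final_url,
    data=final_chunk,
    headers={"content-type": "video/mp4", "x-amz-acl": "private"},
)
```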
Final part handling
When the final part is uploaded, Frame.io begins assembling the complete file. This process includes a 60-second grace period for any remaining parts to complete. We recommend uploading the final part only after all other parts have been successfully uploaded.
That’s it! Navigate to Frame.io to see your successfully uploaded real-time asset. 🎉
Managing Asset Names
If the filename isn’t known during asset creation, you can use the `extension` field without a `name`:
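For example (a sketch, using the same endpoint and auth as before):

```python
response = requests.post(
    "https://api.frame.io/v2/devices/assets",
    headers={"Authorization": "Bearer <access_token>"},
    json={"extension": ".mp4", "is_realtime_upload": True},  # no "name" field
)
```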
The system will assign a default placeholder name.
You can update this name by including an `asset_name` field when requesting upload URLs:
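A sketch, with a hypothetical final filename:

```python
response = requests.post(
    f"https://api.frame.io/v2/devices/assets/{asset_id}/realtime_upload/parts",
    headers={"Authorization": "Bearer <access_token>"},
    json={
        "asset_name": "final_render.mp4",  # hypothetical final name
        "parts": [{"number": 3, "size": 10_568_125, "is_final": False}],
    },
)
```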
The name will only update if the asset still has its default name; if it’s been renamed in the Frame.io UI or previously updated, the request will be ignored.
Optimizing URL Requests
For efficiency, request URLs for as many parts as you currently have data available, rather than individually. This approach is particularly valuable for large files where upload speed might lag behind data generation.
Handling Media File Headers
Some media formats require headers at the beginning of the file that aren’t written until the entire file is complete. This creates a challenge when the header is smaller than AWS’s minimum part size of 5 MiB (5,242,880 bytes).
Our recommendation:
- Reserve the first 5,242,880 bytes of media data without uploading
- Begin uploading parts starting with `part_number=2`
- When the file is complete, prepend the header to the reserved data
- Request a URL for `part_number=1` and upload this combined chunk
This approach ensures your first chunk meets the minimum size requirement while preserving proper file structure.
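A sketch of that flow, where `stream`, `header_bytes`, `request_part_url`, and `upload_part` are hypothetical helpers standing in for your renderer and the API calls shown earlier:

```python
RESERVE = 5_242_880  # the AWS minimum part size (5 MiB)

reserved = stream.read(RESERVE)  # hold these bytes back; don't upload them yet
part_number = 2
while not stream.finished():
    chunk = stream.read_available()
    upload_part(request_part_url(number=part_number, size=len(chunk)), chunk)
    part_number += 1

# The file is complete, so the format header is now known
first_chunk = header_bytes + reserved  # prepend the header to the reserved data
upload_part(
    # is_final requires asset_filesize in the same request
    request_part_url(number=1, size=len(first_chunk), is_final=True),
    first_chunk,
)
```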
Scaling Part Size for Optimal Performance
AWS imposes limits that affect upload strategy:
- Maximum file size: 5 TiB (5,497,558,138,880 bytes)
- Maximum number of parts: 10,000
- Minimum part size: 5 MiB (5,242,880 bytes)
A fixed part size creates trade-offs:
- Using the minimum size (5 MiB) for all 10,000 parts limits total file size to ~52.4 GB
- Evenly distributing the maximum file size would require ~550 MB chunks, too large for efficient streaming of smaller files
We need a formula that balances these constraints, starting with small parts for responsive uploads while ensuring we can handle very large files if needed.
Recommended Part Size Formula
Here’s our suggested approach in Python:
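```python
import math

MAX_FILESIZE = 5_497_558_138_880  # 5 TiB, the AWS object size limit
MAX_PART_COUNT = 10_000           # the AWS part count limit
SUM_OF_SQUARES = 333_383_335_000  # sum of n**2 for n in 1..10_000


def part_size(part_number: int, format_bytes_per_second: int) -> int:
    # Never drop below the AWS minimum part size of 5 MiB
    rate = max(format_bytes_per_second, 5_242_880)
    # Scalar chosen so all 10,000 part sizes total ~5 TiB (derived below)
    scalar = (MAX_FILESIZE - MAX_PART_COUNT * rate) / SUM_OF_SQUARES
    return math.floor(scalar * part_number**2 + rate)
```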
… where `part_number` is between `1` and `10_000`, inclusive, and `format_bytes_per_second` is the average number of bytes per second your format is expected to consume. We’ll go over how the formula was reached further in.
Scalar value
The `scalar` variable and calculation might be a little perplexing at first glance, but it is a mathematical tool that ensures that, no matter what value we use for `format_bytes_per_second`, if we feed all allowed `part_number` values from `1` to `10_000` into the function, we will receive a set of values that totals to exactly our 5 TiB filesize limit (well, as exactly as possible). We show our work further in on how we came to this formula.
Floor Rounding
By using floor rounding, we leave some bytes on the table, but ensure that regular rounding over 10,000 parts does not accidentally cause us to exceed our maximum allowed filesize. At most 10,000 bytes, or 10 KB, will be left on the table this way, an acceptable tradeoff.
The important characteristics of this formula are:
- When uploading 10,000 parts, the total amount of data uploaded will be within 10 KB of our 5 TiB filesize limit.
- Optimizes for smaller, more efficient payloads at the beginning to increase responsiveness for short and medium-length clips.
- Very long clips will have reduced responsiveness between the end of a file being written and it becoming playable in Frame.io.
The tradeoff between the second and third points is mitigated by the fact that most clips will never reach the size where point three comes into play. We are trading increased responsiveness for most files against decreased responsiveness for very few.
A more advanced and efficient version of our formula (one that generates an anonymous `part_size_calculator` function with our static scalar and data rate precomputed and baked in) might look like this:
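One possible sketch, building on the constants above:

```python
import math


def make_part_size_calculator(format_bytes_per_second: int):
    """Precompute the scalar once and return a part_size_calculator."""
    rate = max(format_bytes_per_second, 5_242_880)
    scalar = (5_497_558_138_880 - 10_000 * rate) / 333_383_335_000
    return lambda part_number: math.floor(scalar * part_number**2 + rate)


# For example, for a ~102 Mbps (12.75 MB/s) format:
part_size_calculator = make_part_size_calculator(12_750_000)
```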
How the Formula Performs
Let’s examine the output characteristics of the formula above over several common file types.
Example 1: Web format
For web-playable formats with a rate of ~5.3 MB/s or less (most H.264/H.265/HEVC files), we will get a payload-size progression that looks like so:
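The values below are computed from the part-size formula with the minimum 5 MiB rate; the sampled part counts are illustrative, and totals are approximate.

| Total Parts | Payload Bytes | Payload MB | Total File Bytes | Total File GB |
|------------:|--------------:|-----------:|-----------------:|--------------:|
| 1 | 5,242,896 | 5.2 | 5,242,896 | 0.005 |
| 100 | 5,406,209 | 5.4 | ~529,814,000 | 0.53 |
| 1,000 | 21,575,817 | 21.6 | ~10,695,362,000 | 10.7 |
| 5,000 | 413,566,329 | 413.6 | ~706,957,700,000 | 707.0 |
| 10,000 | 1,638,536,679 | 1,638.5 | ~5,497,558,000,000 | 5,497.6 |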
Table columns key
- `Total Parts`: the total number of file parts uploaded to AWS.
- `Payload Bytes`: the size of the AWS PUT payload when `part_number` is equal to `Total Parts`.
- `Payload MB`: as `Payload Bytes`, but in megabytes.
- `Total File Bytes`: the total number of bytes uploaded for the file when `Total Parts` sequential parts have been uploaded.
- `Total File GB`: as `Total File Bytes`, but in GB.
These values are nicely balanced for real-time uploads of web-playback codecs like H.264; most such files will be under 10.7 GB and will therefore complete within 1,000 parts, where the payload size never exceeds 21.6 MB.
Even if we chewed halfway through our allowed parts, the payload size would still never exceed 413.6 MB, and the upload would total roughly 707 GB, more than enough for the vast majority of web files.
It is only once we near the end of our allowed part count that payload sizes begin to balloon. Even then, they never exceed 1.7 GB, well below the AWS limit of 5 GiB per part.
Example 2: ProRes 422 LT
ProRes 422 LT has a data rate of 102 Mbps (12.75 MB/s) and generates a table like so:
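Computed from the same formula with `format_bytes_per_second = 12_750_000`; part counts sampled as before, totals approximate.

| Total Parts | Payload Bytes | Payload MB | Total File Bytes | Total File GB |
|------------:|--------------:|-----------:|-----------------:|--------------:|
| 1 | 12,750,016 | 12.8 | 12,750,016 | 0.013 |
| 100 | 12,911,077 | 12.9 | ~1,280,450,000 | 1.3 |
| 1,000 | 28,857,758 | 28.9 | ~18,127,309,000 | 18.1 |
| 5,000 | 415,443,955 | 415.4 | ~735,108,000,000 | 735.1 |
| 10,000 | 1,623,525,820 | 1,623.5 | ~5,497,558,000,000 | 5,497.6 |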
This table reveals useful properties compared to our web-optimized values. Within the first 1,000 parts, we are able to upload about 7.4 GB more of the file. Larger initial payloads mean we will not need to request URLs as rapidly at the beginning, making the upload more efficient for the higher data rate. Our payload size at the tail of the upload process remains large.
Example 3: Camera RAW
Finally, let’s try a camera RAW format with a data rate of 280 MB/s. With data arriving this fast, trying to upload in 5 MiB chunks at the beginning just doesn’t make sense:
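Computed with `format_bytes_per_second = 280_000_000`; totals approximate.

| Total Parts | Payload Bytes | Payload MB | Total File Bytes | Total File GB |
|------------:|--------------:|-----------:|-----------------:|--------------:|
| 1 | 280,000,008 | 280.0 | 280,000,008 | 0.28 |
| 100 | 280,080,914 | 280.1 | ~28,002,738,000 | 28.0 |
| 1,000 | 288,091,460 | 288.1 | ~282,701,290,000 | 282.7 |
| 5,000 | 482,286,517 | 482.3 | ~1,737,245,000,000 | 1,737.2 |
| 10,000 | 1,089,146,070 | 1,089.1 | ~5,497,558,000,000 | 5,497.6 |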
Not only are the early payloads more efficient, but we are also saving over half a gigabyte at the upper end, which will make those network calls less susceptible to adverse network events.
Showing our work
Before we pull everything together into an example uploader, let’s see how we arrived at our formula.
What we needed to do was come up with a formula that traded large, heavy payloads at the end of our allowed parts — which most uploads will never reach — for light, efficient payloads near the beginning, where every upload can take advantage. At the same time, we wanted to ensure that our algorithm will land in the ballpark of the 5 TiB filesize limit right at part number 10,000.
It was time to break out some calculus.
We want our part sizes to grow quickly as the part number climbs, so our formula should probably look something like:
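$$f(n) = n^2$$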
… where `n` is the part number. We also want to ensure each part is, at minimum, the data rate of our format, which we will call `r`:
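$$f(n) = n^2 + r$$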
Now we need a way to compute the sum of this formula over the first 10,000 natural numbers (1, 2, 3, …). The sigma symbol (`Σ`) denotes summation. Let’s add it to our formula:
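$$\sum_{n=1}^{10{,}000} \left(n^2 + r\right)$$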
… and redefine `n` as the series of natural numbers between 1 and 10,000, inclusive.
The equation is not very useful to us yet. It has the right intuitive shape, but if we set `n=10,000` and `r=5,242,880` like we want to, it just spits out a single result: 385,812,135,000 (385 GB). Not only is the result far below our filesize limit of 5 TiB, there is no way to manipulate the formula so that it produces the result we actually want.
Let’s give ourselves a dial to spin:
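$$\sum_{n=1}^{10{,}000} \left(xn^2 + r\right)$$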
… where `x` is a scalar we can solve for to get 5 TiB as the result. Now we can set the equation equal to our filesize limit and solve for `x`:
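$$\sum_{n=1}^{10{,}000} \left(xn^2 + r\right) = 5{,}497{,}558{,}138{,}880$$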
Often, summations must be computed iteratively, as in a `for` or `while` loop. But it turns out there is a perfect shortcut for us: a known formula for cheaply computing the sum of the squares of the first `n` natural numbers:
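$$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}$$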
Rearranging it into a polynomial makes it easier to look at:
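$$\frac{n(n+1)(2n+1)}{6} = \frac{2n^3 + 3n^2 + n}{6}$$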
We can add our variables, `x` and `r`, to both sides:
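$$\sum_{k=1}^{n} \left(xk^2 + r\right) = x\left(\frac{2n^3 + 3n^2 + n}{6}\right) + rn$$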
And finally we set our new formula equal to 5 TiB:
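$$x\left(\frac{2n^3 + 3n^2 + n}{6}\right) + rn = 5{,}497{,}558{,}138{,}880$$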
Now all we need to do is solve for `x` by setting `n=10,000`, our total part count. This will give us a way to compute a static scalar for a given data rate. Rather than doing this by hand, let’s plug it into Wolfram Alpha:
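With `n = 10,000`, the polynomial evaluates to 333,383,335,000, and solving for `x` gives:

$$x = \frac{5{,}497{,}558{,}138{,}880 - 10{,}000\,r}{333{,}383{,}335{,}000}$$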
Now we’re getting somewhere! If our data rate was the minimum part size (5 MiB), we would get a static scalar of:
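$$x = \frac{5{,}497{,}558{,}138{,}880 - 10{,}000 \times 5{,}242{,}880}{333{,}383{,}335{,}000} = \frac{5{,}445{,}129{,}338{,}880}{333{,}383{,}335{,}000} \approx 16.3329379942\ldots$$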
In computerland, this represents a float64 value of `16.33293799427617`. Our formula to determine part size in this instance would be:
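$$s = 16.33293799427617 \cdot n^2 + 5{,}242{,}880$$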
… where `s` is our part size.
We still have one more problem. In the real world, we can’t have a payload with non-whole bytes. We need to round each value. We’ll use Python, and round down:
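```python
import math


def part_size(part_number: int) -> int:
    # Static scalar and 5 MiB data rate baked in; floor keeps bytes whole
    return math.floor(16.33293799427617 * part_number**2 + 5_242_880)
```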
We have arrived at a concrete example of the original function given in this guide.
Building a basic uploader
Let’s take a look at some simple Python-like pseudocode for uploading a file being rendered in real time, using everything we have learned in this guide:
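Here is one possible shape for that pseudocode. `read_chunk`, `render_finished`, and `stream_exhausted` are hypothetical helpers tied to your renderer, and error handling, retries, and URL batching are deliberately omitted:

```python
import math
import requests

API = "https://api.frame.io/v2"  # assumed base URL
HEADERS = {"Authorization": "Bearer <access_token>"}


def part_size(part_number: int, format_bytes_per_second: int) -> int:
    rate = max(format_bytes_per_second, 5_242_880)
    scalar = (5_497_558_138_880 - 10_000 * rate) / 333_383_335_000
    return math.floor(scalar * part_number**2 + rate)


def upload_realtime(stream, format_bytes_per_second: int) -> None:
    # 1. Create the asset with no filesize, flagged as real-time
    asset = requests.post(
        f"{API}/devices/assets",
        headers=HEADERS,
        json={"extension": ".mp4", "is_realtime_upload": True},
    ).json()

    part_number = 1
    total_bytes = 0
    while True:
        # 2. Gather the next chunk as the renderer produces data
        chunk = read_chunk(stream, part_size(part_number, format_bytes_per_second))
        total_bytes += len(chunk)
        is_final = render_finished() and stream_exhausted(stream)

        # 3. Request an upload URL for this part
        body = {"parts": [
            {"number": part_number, "size": len(chunk), "is_final": is_final},
        ]}
        if is_final:
            body["asset_filesize"] = total_bytes
        part_urls = requests.post(
            f"{API}/devices/assets/{asset['id']}/realtime_upload/parts",
            headers=HEADERS,
            json=body,
        ).json()["upload_urls"]

        # 4. Upload the chunk to storage
        requests.put(part_urls[0], data=chunk, headers={"x-amz-acl": "private"})

        if is_final:
            return
        part_number += 1
```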
Advanced uploading
The code above only demonstrates the basic flow of uploading a file in real time. In reality, this logic will need to be enhanced with error handling and advanced upload techniques.
Next Up
Real-time uploads offer a way to make your integration as responsive as possible, with assets becoming playable in Frame.io seconds after they have finished recording. A later guide will cover advanced uploading techniques and requirements. Although it is written with basic uploads in mind, the majority of the guide will still be applicable to real-time uploads.
If you haven’t already, we encourage you to reach out to our team, then continue to the next guide. We look forward to hearing from you!