How to: Upload (Basic)

Basic uploading tutorial


Introduction

It’s happening!!! We’re going to upload an asset to Frame.io! This is what we are all here for; let's step up to the plate!

What will I need?

If you haven’t read the Implementing C2C: Setting Up guide, give it a quick glance before moving on!

You will also need the access_token you received while following the C2C hardware or C2C Application authentication and authorization guide.

In this guide we will be using a test asset found at this Frame.io link. Hit the Download button to download it. Using the example file will help you match all the values in our example curl commands.

Step 1: Creating an asset

Let’s say we want to upload a file that our device created 10 seconds ago. We can create our asset in Frame.io like so:

Shell
{
curl -X POST https://api.frame.io/v2/devices/assets \
    --header 'Authorization: Bearer [access_token]' \
    --header 'Content-Type: application/json' \
    --header 'x-client-version: 2.0.0' \
    --data-binary @- <<'__JSON__' 
        {
            "name": "C2C_TEST_CLIP.mp4", 
            "filetype": "video/mp4", 
            "filesize": 21136250,
            "offset": 10
        }
__JSON__
} | python -m json.tool
API endpoint specification

Docs for /v2/devices/assets can be found here. This endpoint used to be /v2/assets, and while that endpoint will continue to function, we ask that new integrators use /v2/devices/assets.

JSON encoding

This endpoint does not accept the form/multipart encoding we have been using up to this point, so instead we are using application/json. application/x-www-form-urlencoded is also accepted.

CMD syntax

In the above command we are using heredoc to pipe the JSON data into curl while allowing multi-line string syntax to make the payload more readable. See more about this approach here if you are interested. --data-binary @- tells curl to use raw data from stdin as the payload.

Let’s go over the JSON payload params:

name: The name the asset should have in Frame.io. This value does not have to match the name of the file on disk; it can be whatever you want it to be in Frame.io.

filetype: The mime type of the asset. Many languages have built in utilities for detecting file mimetype (see Go and Python for examples).

filesize: The size of the file in bytes. The above file is ~21.1 MB.

offset: The number of seconds since the file was created. Defaults to 0 if not set. Although the offset parameter is technically optional, we will require that integrations supply it. Offset is how we handle detecting when a file should be rejected due to a device being paused. We will go over the importance of the offset parameter in the advanced uploading guide.
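If you are assembling this payload in code rather than by hand, every field can be derived from the file itself. The sketch below is a minimal, hypothetical helper using only Python's standard library; `build_asset_payload` is our own name, not part of any Frame.io SDK:

```python
import json
import mimetypes
import os
import time

def build_asset_payload(path: str, created_at: float) -> str:
    """Build the JSON payload for /v2/devices/assets from a file on disk.

    `created_at` is a Unix timestamp of when the device created the file.
    """
    filetype, _ = mimetypes.guess_type(path)  # e.g. "video/mp4" for .mp4 files
    return json.dumps({
        "name": os.path.basename(path),        # does not have to match the on-disk name
        "filetype": filetype,
        "filesize": os.path.getsize(path),     # size in bytes
        "offset": int(time.time() - created_at),  # seconds since the file was created
    })
```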

We should get a response like so (lots of data elided):

JSON
{
    "_type": "file",
    ...
    "id": "9a280f99-8f4f-46b0-a4b4-ec4c2f95138e",
    ...
    "upload_urls": [
        "https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/[part_01_path]",
        "https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/[part_02_path]"
    ],
    ...
}

At this point, all we have done is tell Frame.io that we intend to upload an asset, but no data from our file has been uploaded yet. If you navigate to your device’s folder in your project, you should see that a placeholder asset has been created and is in the uploading state.

upload_urls is the response field we are most interested in. Those are the urls we are going to upload chunks of our asset to. For our test file, we should have received only 2 upload URLs in the response.
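The same create call can also be sketched in code. This is an illustrative, standard-library-only example; `build_create_request` is a name we made up, and the actual send step is shown in a comment so you can substitute your preferred HTTP client:

```python
import json
import urllib.request

API_URL = "https://api.frame.io/v2/devices/assets"

def build_create_request(access_token: str, name: str, filetype: str,
                         filesize: int, offset: int) -> urllib.request.Request:
    """Build (but do not send) the asset-creation request."""
    body = json.dumps({
        "name": name,
        "filetype": filetype,
        "filesize": filesize,
        "offset": offset,
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
            "x-client-version": "2.0.0",
        },
    )

# To send and read the upload URLs:
# with urllib.request.urlopen(build_create_request(...)) as resp:
#     upload_urls = json.load(resp)["upload_urls"]
```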

Step 2: Splitting our file into chunks

Our response payload included multiple URLs! When we upload our asset to Frame.io, we are going to split the file up into chunks, then upload those chunks individually. We get a couple of benefits from chunking:

  • Better reliability: We don’t have to start over from the beginning if one of our uploads fails partway through. If very large files had to be uploaded in a single call, a mid-transfer failure would be much more likely and much more costly.
  • Faster uploads: We can upload multiple chunks of the same file in parallel, allowing for faster uploads. We'll cover parallel uploads in more depth in a later guide.

To determine the chunk size, use the following formula:

Python
import math

# We use `math.ceil()` in order to round up when the division is not even
chunk_size = math.ceil(float(file.size) / float(len(response.upload_urls)))

Using this formula, the chunk size for our file is 10568125 bytes.

Python
math.ceil(21136250 / 2)
# 10568125

Chunk size will vary depending on the number of links you receive. In general, file chunks will be around 25 MB. How file chunks are calculated will be discussed further in a later guide.

Last chunk size

Because not all numbers can be divided evenly, the last chunk of the file might be slightly less than the chunk_size. Depending on your language you may or may not need to account for that when reading file chunks.
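As a sketch, here is one way to precompute every chunk's size up front, with the final chunk absorbing whatever remains. `chunk_sizes` is a hypothetical helper of our own, not a Frame.io API:

```python
import math

def chunk_sizes(filesize: int, num_urls: int) -> list[int]:
    """Return one chunk size per upload URL.

    Every chunk is `chunk_size` bytes except (possibly) the last,
    which holds whatever bytes remain.
    """
    chunk_size = math.ceil(filesize / num_urls)
    sizes = []
    remaining = filesize
    for _ in range(num_urls):
        sizes.append(min(chunk_size, remaining))
        remaining -= sizes[-1]
    return sizes
```

For our test file, `chunk_sizes(21136250, 2)` returns two equal chunks of 10568125 bytes, since the division happens to be even.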

In the next section we will use the head and tail commands to get the first and second chunk of our file.

Step 3: Uploading our chunks

To upload our first chunk, we will make the following call:

Shell
head -c 10568125 ~/Downloads/C2C_TEST_CLIP.mp4 | \
curl -X PUT https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/[part_01_path] \
        --include \
        --header 'content-type: video/mp4' \
        --header 'x-amz-acl: private' \
        --data-binary @-
CMD syntax

--data-binary @- tells curl to use raw data from stdin as the payload, which we are supplying through head.

We need to add two headers to the request:

content-type: The same filetype value we used in step 1.

x-amz-acl: Required by AWS S3. Will always be private.
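In code, the head/tail piping above amounts to reading fixed-size chunks from the file and PUTting each one with those two headers. Below is a minimal standard-library sketch; the helper names are ours, and error handling is omitted:

```python
import urllib.request

def read_chunks(path: str, chunk_size: int):
    """Yield successive `chunk_size`-byte chunks of the file at `path`.

    The final chunk may be smaller than `chunk_size`.
    """
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

def upload_chunk(url: str, chunk: bytes, filetype: str) -> None:
    """PUT one chunk to its pre-signed S3 URL with the required headers."""
    request = urllib.request.Request(
        url,
        data=chunk,
        method="PUT",
        headers={"content-type": filetype, "x-amz-acl": "private"},
    )
    urllib.request.urlopen(request)  # raises on non-2xx responses
```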

We should get a result like so:

HTTP/1.1 100 Continue

HTTP/1.1 200 OK
...

And our second chunk:

Shell
tail -c 10568125 ~/Downloads/C2C_TEST_CLIP.mp4 | \
curl -X PUT https://frameio-uploads-production.s3-accelerate.amazonaws.com/parts/[part_02_path] \
        --include \
        --header 'content-type: video/mp4' \
        --header 'x-amz-acl: private' \
        --data-binary @-

This should yield the same result. When we view the asset in Frame.io, the clip should now be playable! 🎉🎉🎉

Upload errors

When you upload a file chunk, you are sending the data directly to AWS S3. The errors returned from this endpoint will not be normal Frame.io errors, and should be handled appropriately. We will go over handling S3 errors in more detail in the error handling guide.

Chunk order

Although it’s easier to think about chunks being uploaded sequentially, chunks can be uploaded in any order. Try running the uploads in reverse; everything will work as expected!

Putting it all together

Let’s take a look at some simple python-like pseudocode for uploading a file:

Python
file = open("~/Downloads/C2C_TEST_CLIP.mp4")
mimetype = mimetypes.for_file("~/Downloads/C2C_TEST_CLIP.mp4")[0]
created_at = time.ctime(file.stat.ST_CTIME)

asset = c2c.asset_create(
    name="C2C_TEST_CLIP.mp4",
    filetype=mimetype,
    filesize=file.size,
    offset=(datetime.now() - created_at).seconds,
    channel=0,
)

chunk_size = math.ceil(float(file.size) / float(len(asset.upload_urls)))

for chunk_url in asset.upload_urls:
    chunk = file.read(bytes=chunk_size)
    c2c.upload_chunk(chunk, chunk_url, mimetype)

This example does not include any error handling or parallelism, which will be covered in the error handling and advanced uploading guides, respectively.

Next up

Congratulations!!! You have uploaded your first asset to Frame.io. A later guide will cover advanced uploading techniques and requirements.

If you haven’t already, we encourage you to reach out to our team, then continue to the next guide. We look forward to hearing from you!