In the previous post, I walked through how to upload a file from an HTTP multipart client, to an IBM Cloud Function, and how to persist that file on IBM Cloud Object Storage (COS). In this post I will explore getting the file back out of COS and downloaded to an HTTP client.

Get Outta My COS

And into my car! Ahem ...

IBM Cloud Object Storage is compatible with AWS S3. Just as in the previous post, we will use the IBM fork of the AWS SDK for Node.js. The first step is to instantiate the client. If you are confused about where to get the various credentials, I cover that in detail in the previous post as well.

const AWS = require( 'ibm-cos-sdk' );

let cos = new AWS.S3( {
  endpoint: params.COS_ENDPOINT,
  apiKeyId: params.COS_API_KEY,
  ibmAuthEndpoint: params.COS_AUTH_ENDPOINT,
  serviceInstanceId: params.COS_SERVICE_INSTANCE
} );

cos.getObject( {
  Bucket: params.COS_BUCKET,
  Key: params.name
} )
.promise()
.then( ( data ) => {
  console.log( data );
} );

The "getObject()" call on the COS client requires the name of the "Bucket" where the object resides, and the name of the object you want to get. Note that name and "Key" are synonymous. This is effectively the path and file name of the object you want to download.

Watch out! If the file does not exist, the SDK will throw an error effectively halting the execution of the function. You may want to do a bit more error catching than is demonstrated in the snippet above.
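If I wanted to guard against a missing object, a minimal sketch might look like this. The "NoSuchKey" error code is what the S3 API reports when an object does not exist; the "safeGet" helper name and the 404 response shape are my own inventions, not part of the SDK.

```javascript
// Sketch: wrap getObject() so a missing object becomes a 404 response
// instead of an unhandled error. "safeGet" is a hypothetical helper name.
function safeGet( cos, bucket, name ) {
  return cos.getObject( {
    Bucket: bucket,
    Key: name
  } )
  .promise()
  .then( ( data ) => {
    return {
      statusCode: 200,
      body: data.Body
    };
  } )
  .catch( ( err ) => {
    // "NoSuchKey" is the error code the S3 API uses for a missing object.
    if( err.code === 'NoSuchKey' ) {
      return {
        statusCode: 404,
        body: { message: `No object named "${name}" found.` }
      };
    }

    // Anything else (bad credentials, network) is a real failure.
    throw err;
  } );
}
```

Anything other than a missing object is re-thrown so it still surfaces in the function logs.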

And Into My Function

There are a couple of subtleties that we need to consider for the Cloud Function side of this equation. The first is that the COS client is promise-based, which means we need to wait for it to finish before terminating the function itself. The second is: what exactly do we return?

function download( params ) {
  const AWS = require( 'ibm-cos-sdk' );

  let cos = new AWS.S3( {
    endpoint: params.COS_ENDPOINT,
    apiKeyId: params.COS_API_KEY,
    ibmAuthEndpoint: params.COS_AUTH_ENDPOINT,
    serviceInstanceId: params.COS_SERVICE_INSTANCE
  } );

  return cos.getObject( {
    Bucket: params.COS_BUCKET, 
    Key: params.name
  } )
  .promise()
  .then( ( data ) => {
    return {
      headers: { 
        'Content-Disposition': `attachment; filename="${params.name}"`,
        'Content-Type': data.ContentType 
      },
      statusCode: 200,
      body: Buffer.from( data.Body ).toString( 'base64' )
    }; 
  } );
}

exports.main = download;

In the previous post we wrapped our callback-filled parsing code in a "Promise" and returned that reference. That kept the function running. In this case, the COS SDK is promise-based, and we can then return the promise generated by the call to "getObject()" itself.

As far as what to return, we have two stops. The first is in the headers of the return value. In order to prompt for a download, we need to set the "Content-Disposition" header. The result from the "getObject()" call will have the appropriate content type we can use in the header as well.

The "body" of the return object should be the contents of the file, Base-64 encoded. The bytes that make up the file are in the "data" object result from the "getObject()" call in a property labeled "Body". We can put that into a "Buffer" instance, and leverage "toString( 'base64' )" to get the Base-64 encoded content.
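To make that encoding step concrete, here is the round trip in isolation; the string content is just an example stand-in for file bytes.

```javascript
// Round trip: raw bytes → Base-64 string (what goes in "body") → bytes again.
const raw = Buffer.from( 'Hello, COS!' );
const encoded = raw.toString( 'base64' );

// Decoding reverses the process, which is what the platform does on delivery.
const decoded = Buffer.from( encoded, 'base64' );

console.log( encoded );
console.log( decoded.toString() );
```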

Moar Integration!

At this point we can read, edit, and add for our BREAD operations. We are so close, and the code is so similar, that rather than make another blog post, I will round out the operations in the following code snippets. First up, browse.

Browse the Objects in a Bucket

The AWS S3 documentation has "listObjects()" and "listObjectsV2()", and suggests preferring the latter. The "listObjectsV2()" call takes the "Bucket" name as an argument, and will return at most 1,000 items. If you have more than 1,000 items in your bucket, you will need to page through them.

return cos.listObjectsV2( {
  Bucket: params.COS_BUCKET
} )
.promise()
.then( ( data ) => {
  let body = [];

  for( let c = 0; c < data.Contents.length; c++ ) {
    body.push( {
      name: data.Contents[c].Key,
      etag: data.Contents[c].ETag.replace( /"/g,"" ),
      modified: data.Contents[c].LastModified,
      size: data.Contents[c].Size
    } );
  }
  
  return {
    headers: {
      'Content-Type': 'application/json'          
    },
    body: body
  };      
} );

The "data" object in this case has a "Contents" property, an array where each object has a "Key" property (file name), "LastModified", "Size", and "ETag". You can effectively return the data as-is. I am being picky here in forcing lowercase keys. The "ETag" value also arrives enclosed in nested quotes, so I clean that up to get the raw string as the value.
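For buckets past that 1,000-item limit, the paging can be sketched with a continuation token. The "listAllObjects" helper name is my own; "ContinuationToken", "IsTruncated", and "NextContinuationToken" come from the S3 "listObjectsV2()" API.

```javascript
// Sketch: page through a bucket with more than 1,000 objects.
// "listAllObjects" is a hypothetical helper; "cos" is an instantiated
// COS client as shown earlier in the post.
async function listAllObjects( cos, bucket ) {
  let keys = [];
  let token = undefined;

  do {
    const data = await cos.listObjectsV2( {
      Bucket: bucket,
      ContinuationToken: token
    } )
    .promise();

    for( const item of data.Contents ) {
      keys.push( item.Key );
    }

    // IsTruncated stays true while more pages remain.
    token = data.IsTruncated ? data.NextContinuationToken : undefined;
  } while( token !== undefined );

  return keys;
}
```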

Delete an Object in a Bucket

Finally we come to removing an object from a bucket. This is pretty much the same as getting an object list from a bucket. And just like the listing, you can return the call to "deleteObject()" and the resulting promise to keep the function running. The return from a single deletion is ... nothing. I return the file name that was just deleted as a courtesy.

return cos.deleteObject( {
  Bucket: params.COS_BUCKET,
  Key: params.name
} )
.promise()
.then( ( data ) => {
  return {
    headers: {
      'Content-Type': 'application/json'
    },
    statusCode: 200,
    body: {
      name: params.name
    }
  };
} );

The parameters passed to "deleteObject()" include the "Bucket" (the bucket name) and the "Key" (the object name).
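The S3 API also offers a batch variant, "deleteObjects()", if you want to remove several objects in one round trip. A sketch, with "removeMany" as my own helper name:

```javascript
// Sketch: batch-delete several objects with a single deleteObjects() call.
// "removeMany" is a hypothetical helper; "cos" is an instantiated COS client.
function removeMany( cos, bucket, names ) {
  return cos.deleteObjects( {
    Bucket: bucket,
    Delete: {
      Objects: names.map( ( name ) => ( { Key: name } ) )
    }
  } )
  .promise()
  .then( ( data ) => {
    // "Deleted" lists the objects that were actually removed.
    return data.Deleted.map( ( item ) => item.Key );
  } );
}
```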

Next Steps

We can now leverage all the BREAD operations (browse, read, edit, add, delete) from within an IBM Cloud Function, persisting files to IBM Cloud Object Storage. From here we could build our own serverless-based file manager. I am kicking around the idea of a VSCode plug-in to manage those files in my COS buckets.