I have a Python script stored in an S3 bucket. I'd like to run it in AWS (presumably on an EC2 instance) and save its output (a pickle file) back into the same S3 bucket.
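For the S3 side itself, I assume something like this boto3 sketch would handle pulling the script down onto the instance (the bucket and key names here are placeholders I made up):

import boto3

s3 = boto3.client('s3')

# Hypothetical names -- substitute the real bucket and key.
BUCKET = 'my-tweets-bucket'
SCRIPT_KEY = 'scripts/metadata_df.py'

# Copy the script from S3 onto the instance's local disk.
s3.download_file(BUCKET, SCRIPT_KEY, 'metadata_df.py')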
In the Python script itself, you specify a filename and just call to_pickle (so the file is written to local disk):
import os
import pickle

def metadata_df(search_api, hashtags, since, until, filename, lat_long_only=True):
    # If a pickle for this filename already exists, reuse it as a cache.
    if os.path.exists(filename):
        with open(filename, 'rb') as f:
            df = pickle.load(f)
    else:
        df = ...
        df.to_pickle(filename)
    return df

...

if __name__ == "__main__":
    # yesterday, today, api, and hashtags are defined in the elided code above.
    pickle_name = yesterday + '_' + 'tweets.pkl'
    metadata_df(api.search, hashtags, since=yesterday, until=today,
                filename=pickle_name, lat_long_only=True)
    ...
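After the run, I assume pushing the pickle back up is just the reverse call (again, the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# Upload the freshly written pickle into the same (hypothetical) bucket.
s3.upload_file(pickle_name, 'my-tweets-bucket', pickle_name)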
I'm wondering how to go about wiring all of this up (I only need to run it a single time).