Is there a better way to convert a Redshift table to JSON, or a plugin that you can share?
Below is the Python script that I am using:
import json
cur = context.cursor()
cur.execute('SELECT * FROM sample_table')
rows = cur.fetchall()
json_payload = []
for row in rows:
    row_dict = {}
    row_dict['column1'] = row[0]
    row_dict['column2'] = row[1]
    row_dict['column3'] = row[2]
    json_payload.append(row_dict)
json_string = json.dumps(json_payload)
print(json_string)
I have tried to improve the solution; here is the improvement. As a next step I will add writing the data into S3.
import json
cursor = context.cursor()
sql_query = 'SELECT * FROM sample_table'
cursor.execute(sql_query)
columns = [column[0] for column in cursor.description]
data = [dict(zip(columns, row)) for row in cursor.fetchall()]
print(data)
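For the S3 step, note that `json.dumps` will raise a `TypeError` on `Decimal` and `datetime` values, which Redshift drivers commonly return. A minimal sketch of the serialization step, using sample rows in place of the cursor results (the bucket name and key in the commented S3 lines are placeholders, not part of the original post):

```python
import json
from datetime import date, datetime
from decimal import Decimal

def json_default(value):
    """Fallback serializer for types json.dumps can't handle natively."""
    if isinstance(value, Decimal):
        return float(value)
    if isinstance(value, (date, datetime)):
        return value.isoformat()
    raise TypeError(f"Unserializable type: {type(value)}")

# Sample rows standing in for the list of dicts built from cursor.fetchall()
data = [
    {"column1": Decimal("9.99"),
     "column2": datetime(2024, 1, 2, 3, 4, 5),
     "column3": "a"},
]
json_string = json.dumps(data, default=json_default)
print(json_string)

# The S3 write could then look like this (assuming boto3 is available
# and credentials are configured; bucket/key are placeholders):
# import boto3
# boto3.client("s3").put_object(
#     Bucket="my-bucket", Key="sample_table.json", Body=json_string
# )
```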
Hello @omfdataoptimisation
Thank you for your post and welcome to the Matillion Community, you ask a great question and I wanted to share a solution from our team.
One option would be to use the Query Result To Grid component (documentation here).
You would then need to use something similar to the python script below:
```python
# Create a dictionary to hold the grouped data
grouped_data = {}

# Iterate over the list of files
for file_name, file_type, download_url in pv_attached_files:
    if file_type not in grouped_data:
        grouped_data[file_type] = []
    grouped_data[file_type].append((file_name, download_url))

# Convert the dictionary to a single string
resulting_string = ""
for file_type, files in grouped_data.items():
    resulting_string += f"{file_type}:\n"
    for file_name, download_url in files:
        resulting_string += f" {file_name}: {download_url}\n"
    resulting_string += "\n"

resulting_string = resulting_string.strip()  # Remove any trailing newlines
print(resulting_string)
context.updateVariable('pv_proccessed_files', resulting_string)
```
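To see the shape of the string the grouping loop produces, here is a standalone run with sample values standing in for the `pv_attached_files` grid variable and without the Matillion `context` object (the file names and URLs below are made up for illustration):

```python
# Sample stand-in for the pv_attached_files grid variable:
# each entry is (file_name, file_type, download_url)
pv_attached_files = [
    ("report.pdf", "pdf", "https://example.com/report.pdf"),
    ("photo.png", "image", "https://example.com/photo.png"),
    ("invoice.pdf", "pdf", "https://example.com/invoice.pdf"),
]

# Group (file_name, download_url) pairs by file_type
grouped_data = {}
for file_name, file_type, download_url in pv_attached_files:
    grouped_data.setdefault(file_type, []).append((file_name, download_url))

# Render one section per file type
resulting_string = ""
for file_type, files in grouped_data.items():
    resulting_string += f"{file_type}:\n"
    for file_name, download_url in files:
        resulting_string += f" {file_name}: {download_url}\n"
    resulting_string += "\n"
resulting_string = resulting_string.strip()
print(resulting_string)
```

This prints a `pdf:` section listing both PDF files, then an `image:` section with the single image, matching the format the component-based job would pass on to the `pv_proccessed_files` variable.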
This would enable you to get the results you require without the use of Jython. I hope this helps; please let me know if you need anything else.
Kind regards, Joe