"Ask Matillion Anything: Connectors" Begins TODAY!

Hello, hello!

 

Let's start the AMA today and keep going all week! Please drop your questions in the comments below!

 

Does Matillion use CData in the backend?

Instead of specifying connection details manually at every data query node, could Matillion have a central connection manager?

Could bulk load features be added to Matillion? I know a separate product called Data Loader exists, but can it be integrated seamlessly?

With Matillion, do we get logs at the most granular level?

Yes! We utilise CData drivers for some of our connectors.

Hi Manoj,

 

Streamlined connection and credential management is something we're actively looking into right now for Data Productivity Cloud - we want to make managing connections to external services as easy and secure as possible.

 

I would be really interested in hearing about any requirements you have around using, storing, and managing connections, so if you have any additional thoughts, please let me know.

 

Thanks!

Watch this space! With the launch of the Data Productivity Cloud we are working on delivering a seamless journey between all of our products.

Any new connectors in the pipeline for future releases?

Any plans to add smart intelligence features to Matillion, like a copilot, in future releases?

What is the difference between the Data Productivity Cloud and the regular Matillion tool?

Any plans for built-in CI/CD capabilities for easy deployments?

The current Git workflow looks a bit complex from the point of view of a new user who is new to DataOps/DevOps and comes from a check-in/check-out background. Is the feedback from your users that they find it easy or difficult? If difficult, are any improvements for an easier workflow in the pipeline?

Can the Filter transformation component be improved to support complex filters? The current one supports either OR or AND, not a mixed combination, so an SQL transformation is the only option in this case.
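
For illustration, a mixed condition like the one below takes a few lines of SQL but cannot be built in the Filter component today (the orders table and its columns here are hypothetical):

    -- Mixed AND/OR grouping: high-priority open orders,
    -- plus any escalated order from EMEA.
    SELECT *
    FROM orders
    WHERE (status = 'open' AND priority = 'high')
       OR (status = 'escalated' AND region = 'EMEA');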

Any plans for easier SCD Type 2 transformations, with built-in capabilities in the writer or transformation components, the way dbt does it?
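
For context on what built-in support would replace: a hand-rolled SCD Type 2 load today is typically a two-statement SQL pattern. A minimal sketch, assuming hypothetical dim_customer and stg_customer tables, a tracked email column, and is_current / valid_from / valid_to housekeeping columns:

    -- Step 1: close out the current row for any customer whose
    -- tracked attribute (email) has changed.
    MERGE INTO dim_customer AS tgt
    USING stg_customer AS src
      ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
    WHEN MATCHED AND tgt.email <> src.email THEN UPDATE
      SET valid_to = CURRENT_TIMESTAMP(), is_current = FALSE;

    -- Step 2: insert a fresh current row for new or changed customers.
    INSERT INTO dim_customer (customer_id, email, valid_from, valid_to, is_current)
    SELECT src.customer_id, src.email, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM stg_customer AS src
    LEFT JOIN dim_customer AS tgt
      ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
    WHERE tgt.customer_id IS NULL OR tgt.email <> src.email;

dbt's snapshot feature generates essentially this from a few lines of configuration, which is the bar being asked for here.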

The current Azure Blob unload supports tables but not queries directly. An SQL Script component is needed to achieve this, which requires configuring credentials in the COPY INTO command. Is anything in the pipeline to improve this?
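
To make the current workaround concrete: on Snowflake, for example, the SQL Script component ends up carrying a statement like the one below, with a hypothetical container URL and a placeholder SAS token baked into the job rather than managed centrally:

    -- Unload the result of an arbitrary query (not just a whole table)
    -- directly to Azure Blob storage.
    COPY INTO 'azure://myaccount.blob.core.windows.net/mycontainer/unload/'
    FROM (SELECT order_id, amount FROM orders WHERE region = 'EMEA')
    CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>')
    FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);

Centralized credential management, as mentioned above, would remove the need to embed the token in every such script.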

Can the Python/Shell Script components be modified to call files from the local machine or a Git repository? Without this, any change to a script needs changes in the orchestration pipelines, and reusability is not achieved if the same script is called in multiple places.

Please include support for a mail component in orchestration pipelines, with good mail body formatting and attachment support.

Hi Manoj,

 

On most of our components we offer logging levels 1 through 5, with 5 being really granular. There are also system-wide logs, which are very detailed.

 

Is there particular log information you're interested in?

 

Thanks!

Hi Manoj,

 

We have quite a few new connectors in the works; we recently released Pendo, Datadog, Brevo, and many more to Matillion Data Loader in the Data Productivity Cloud.

 

Are there any specific connectors you're hoping we release soon?

 

Thanks!