Mule ESB data flow design
Can you recommend how to implement a flow in Mule Studio? I need to create a web service based on data returned from three database stored procedures; that is, I need to run three procedures to get the required data. Each subsequent procedure uses the results of the previous one as its input parameters. So, which is better:
- Write a single stored procedure in the database that runs the three procedures and ultimately returns the data needed by the web service, or
- Have Mule Studio call the three procedures, which together produce the desired data set (I don't currently know how to store the result of the first procedure so I can pass it to the next one)?
- Which option (1 or 2) will be faster?
-
Whether to combine all three procedures into one on the database side, or to orchestrate them in Mule, is your decision.
-
The result of each procedure call will automatically become the payload of your Mule flow, so that data is available to the next procedure. If you want to keep the result of each procedure, you can use a Message Enricher, which stores the result in a flow variable, session variable, or message property of your choice:
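A minimal sketch of this pattern in Mule 3 XML. The procedure names (`first_proc`, `second_proc`), the connector config name `Database_Config`, and the variable name `firstResult` are all assumptions for illustration, not names from your project:

```xml
<flow name="chainedProceduresFlow">
    <!-- Enricher runs the first procedure and stores its result in a
         flow variable, leaving the current payload untouched -->
    <enricher target="#[flowVars.firstResult]" doc:name="Store first result">
        <db:stored-procedure config-ref="Database_Config">
            <db:parameterized-query><![CDATA[{ call first_proc() }]]></db:parameterized-query>
        </db:stored-procedure>
    </enricher>

    <!-- The next procedure can now read the stored value as a parameter -->
    <db:stored-procedure config-ref="Database_Config">
        <db:parameterized-query><![CDATA[{ call second_proc(:param) }]]></db:parameterized-query>
        <db:in-param name="param" value="#[flowVars.firstResult]"/>
    </db:stored-procedure>
</flow>
```

If you don't need to preserve intermediate results, you can also simply chain the `db:stored-procedure` processors and let each one consume the previous payload directly.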
If you want to run the procedures in parallel and collect the three responses, look at scatter-gather. You can also search for Mule aggregators to find other approaches.
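For completeness, a hedged Mule 3 sketch of scatter-gather (again with hypothetical procedure and config names). Note this only applies if the calls are independent; since your procedures feed each other, you would use it only for parts of the flow that don't depend on one another:

```xml
<flow name="parallelProceduresFlow">
    <!-- Scatter-Gather runs each route concurrently and, by default,
         collects all results into a single aggregated payload -->
    <scatter-gather doc:name="Scatter-Gather">
        <db:stored-procedure config-ref="Database_Config">
            <db:parameterized-query><![CDATA[{ call proc_a() }]]></db:parameterized-query>
        </db:stored-procedure>
        <db:stored-procedure config-ref="Database_Config">
            <db:parameterized-query><![CDATA[{ call proc_b() }]]></db:parameterized-query>
        </db:stored-procedure>
        <db:stored-procedure config-ref="Database_Config">
            <db:parameterized-query><![CDATA[{ call proc_c() }]]></db:parameterized-query>
        </db:stored-procedure>
    </scatter-gather>
    <!-- payload is now a collection holding the three route results -->
</flow>
```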
Option 1 could potentially be faster because you open fewer database connections and make fewer round trips, but it all depends on your procedures. On the other hand, you would then lose the benefit of orchestrating in Mule. I'd try both and see which fits your needs.