There are no built-in actions in Power Automate to parse a CSV file. There are external connectors which can do this for you, but this blog post covers how to parse a CSV in Power Automate without the use of any external connectors. The aim is to end up with a JSON array that we can use in other actions. I've exported this flow and you can download it here.
An easier way!
As you can see in this article, parsing CSV files in Power Automate is a lot of work. But it doesn’t have to be!
You can follow the guide below to extract the data step by step using multiple actions and expressions, or you can do it with a single action I developed! The choice is yours!
Here is a video explanation of the process:
For the purposes of this blog post, the sample CSV will have the following contents:
Session Data by SSID,,,,,
SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)
test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB
-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B
test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB
We are going to transform it into this:
[ { "SSID": "test1", "sessionCount": "90 (54.22%)", "clientCount": "26 (48.15%)", "duration": "1d:11h:35m (62.06%)", "totalUsage": "939.09MB (50.69%)", "usage": "814.94MB/124.15MB" }, { "SSID": "-", "sessionCount": "36 (21.69%)", "clientCount": "13 (24.07%)", "duration": "0d:2h:55m (5.09%)", "totalUsage": "0.00B (0.0%)", "usage": "0.00B/0.00B" }, { "SSID": "test2", "sessionCount": "21 (12.65%)", "clientCount": "13 (24.07%)", "duration": "0d:8h:35m (14.97%)", "totalUsage": "538.12MB (29.05%)", "usage": "500.54MB/37.58MB" } ]
The first thing to note is that the first two lines of this CSV need to be excluded, because they do not contain any data. So let's get started!
Step 1 – Get the CSV Data and Split it into lines
The first thing is to get the CSV data and split it into lines:
This compose action uses the split function to convert the original CSV into an array of lines. The output will look like this:
[ "Session Data by SSID,,,,,", "SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)", "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB", "-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B", "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB", ]
The expression I used was:
split(outputs('Get_file_content_using_path')?['body'], decodeUriComponent('%0A'))
If your file is stored in SharePoint you will use the "Get file content" action, so the expression will be:
split(outputs('Get_file_content')?['body'], decodeUriComponent('%0A'))
This flow uses only compose actions. There is a very good reason for this which I will come to later. There are no variables whatsoever.
Important note regarding line endings
I used the decodeUriComponent function to split the CSV:
decodeUriComponent('%0A')
This represents a line feed character (LF), often displayed as \n. This is the Unix standard.
CSV files generated in Windows may use this format, but often use a carriage return and line feed (CR+LF) instead, which is represented as \r\n.
The split expression above will still work with CR+LF, but you will be left with \r characters in your data. The correct expression to split on a CR+LF is:
decodeUriComponent('%0D%0A')
If you load your CSV file into Notepad you can easily see which format your file is in: the bottom right-hand corner will show either "Unix (LF)" or "Windows (CR LF)".
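If your flow might receive files in either format, one defensive approach (a minimal sketch, assuming the SharePoint "Get file content" action) is to strip any carriage return characters first and then split on the line feed, which handles both LF and CR+LF files:

split(replace(outputs('Get_file_content')?['body'], decodeUriComponent('%0D'), ''), decodeUriComponent('%0A'))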
Step 2 – Process each line of the CSV and create a JSON record for each line
Now that we have our CSV formatted as an array, we can loop through each line. Take a look at this loop:
In the "Select an output from previous steps" field I used an expression which requires a bit of explanation. The expression is:
skip(outputs('splitNewLine'),2)
The skip function returns an array, minus the specified number of items removed from the beginning of the collection. My sample CSV had two lines at the beginning which I did not want to include, so by using the skip function they will not be sent into the apply-each loop.
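With the sample file, the skip expression produces the following array. The title and header lines are gone, and only the three data lines enter the loop:

[
  "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB",
  "-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B",
  "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB"
]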
This loop will process each line individually, so every iteration will be working with a single line.
The first compose action splits the incoming line on a comma, here is the expression:
split(item(), ',')
This will produce yet another array within the loop, which contains each of the column values. Here is a sample from the example above:
[ "test1", "90 (54.22%)", "26 (48.15%)", "1d:11h:35m (62.06%)", "939.09MB (50.69%)", "814.94MB/124.15MB" ]
The next compose action, called JSON, is more interesting. First, manually build a JSON string which represents an empty record, like this:
{ "SSID": "", "sessionCount": "", "clientCount": "", "duration": "", "totalUsage": "", "usage": "" }
Then, within the quotes for each column header you can use an expression to access each element of the splitByComma action:
outputs('splitByComma')?[0]
This expression represents the first element of the array produced by the previous compose step. Arrays in Power Automate are numbered from zero. My complete compose action has this code:
{ "SSID": "@{outputs('splitByComma')?[0]}", "sessionCount": "@{outputs('splitByComma')?[1]}", "clientCount": "@{outputs('splitByComma')?[2]}", "duration": "@{outputs('splitByComma')?[3]}", "totalUsage": "@{outputs('splitByComma')?[4]}", "usage": "@{outputs('splitByComma')?[5]}" }
This is everything you need to include in the loop. You might be wondering how this is of any use, as the JSON data has not been added to a variable, so you seemingly have no method of accessing it! A secret of apply-each loops is that they output an array of their outputs, so our final compose step has the simple expression:
outputs('JSON')
This will compile all of the results from the compose action within the loop into a nice array containing JSON objects.
Performance
The flow above will work fine, but you may find it takes a long time to execute with a large CSV file. Because no variables have been used, only compose actions, the concurrency setting of the loop can be adjusted so that it will work on many records simultaneously.
With the three sample rows, the flow took just 1 second to run without concurrency enabled.
I increased the number of rows to 500, and without concurrency it took 1 minute and 46 seconds to execute.
Enabling concurrency with 30 degrees of parallelism reduced the execution time to 7 seconds! Not a bad improvement.
The Complete Flow
Here is an image of the entire flow. To make your life easier I have exported the flow; you can download it here, import it into your environment and modify it to your requirements:
This is a very simple flow, which you can recreate easily. You could add a Parse JSON step after the final compose and then the contents of your CSV will be available as dynamic content in the remainder of your flow.
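If you do add a Parse JSON step, a schema along these lines (derived from the sample output above, with every column typed as a string) will expose each column as dynamic content:

{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "SSID": { "type": "string" },
      "sessionCount": { "type": "string" },
      "clientCount": { "type": "string" },
      "duration": { "type": "string" },
      "totalUsage": { "type": "string" },
      "usage": { "type": "string" }
    }
  }
}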
I hope this helps you to parse your CSV files. Reach out if you need any help!
Support This Website
Thank you for visiting. It’s because of readers like you that I continue to write on this blog.
If you found value in what I share, I’ve set up a ‘Buy Me a Coffee’ page as a way to show your support.
Jericho Rosales says
Hi there, I'm very interested in your flow above. This is exactly what I need. Next question: how can I upload the content to a SharePoint list?
Paulie says
Hi Jericho, you could add a parse JSON step and then an apply-each loop. Within the apply each loop use a create item action and add your items to SharePoint. Let me know if you get stuck and I will help you out.
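As a rough sketch, inside that apply-each loop each column of the parsed JSON can be mapped to a list field with an expression like items('Apply_to_each')?['SSID'] (the loop name Apply_to_each is illustrative).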
Jericho Rosales says
Let's say I have the fields below in my SharePoint list named tblTestData, and the data is based on your sample data above. How can I save the values of these fields into my SharePoint list tblTestData, assuming the list has the same field names as below? Please help; I have the same scenario where I need to upload CSV files into a SharePoint list. Thanks in advance.
"SSID": "",
"sessionCount": "",
"clientCount": "",
"duration": "",
"totalUsage": "",
"usage": ""
Jericho Rosales says
Hi Paulie, I'm a super newbie at this. I'd appreciate it if you could assist me. I don't know what to create in the new step after the completeJSON step. How can I add these values to the SharePoint list?
"SSID": "",
"sessionCount": "",
"clientCount": "",
"duration": "",
"totalUsage": "",
"usage": ""
Paulie says
Because you are a newcomer, it would be quite difficult to explain, but fortunately it is very easy to do. I’ve just made another very quick video which shows the additional steps you need to take in order to add the results to a SharePoint list. I didn’t edit the video, so it is a bit rough and ready.
https://youtu.be/nPlnCDaud5M
Jericho Rosales says
Thank you so much, this is very helpful. I was able to do it with the help of your video; thank you for putting it together so quickly. Wow, you saved my day. Now what I need to figure out is how to get the CSV data from the SharePoint document library that is my trigger, get the data of that CSV file, and connect your flow from that to save it in my SharePoint list. My goal is: when a new email arrives in the shared mailbox, save the attachment to my SharePoint document folder (which I have already resolved), then get the data of that CSV file (connecting your flow tutorial) and save it to the SharePoint list. I don't know if I can do it but I will try. I've been sitting in my chair for 7 hours now; finally I have a lead to follow. Thanks again.
Ted Chapin says
When I try to split the file contents
split(outputs('Get_file_content')['body'], decodeUriComponent('%0A'))
I get
"InvalidTemplate. Unable to process template language expressions in action 'Compose' inputs at line '1' and column '6460': 'The template language function 'split' expects its first parameter to be of type string. The provided value is of type 'Object'. Please see https://aka.ms/logicexpressions#split for usage details.'."
Paulie says
Take a look at your run history – see what the raw outputs of your get file action were.
Paulie says
It doesn’t seem like it recognises it as a text file. What’s the file extension? Renaming it might work. If not, you could base64 decode it with the base64decode function.
Ted Chapin says
The file extension was .csv. It contains ASCII data, not binary. I changed the extension to .txt and now the Get file content output has text. Was not expecting that. Thanks for the fast replies.
Paulie says
Well done! Good luck
Ted Chapin says
I got the compose to work with the .csv file extension using
split(base64ToString(outputs('Get_file_content')['body']['$content']), decodeUriComponent('%0A'))
I used base64ToString, not base64Decode, and had to throw in a ['$content'] after the ['body'],
but it should not care what the extension is if the content is ASCII.
Paulie says
Well done, it shouldn’t care, but the reason I mentioned it is because I’ve seen it be fussy about these things before. Glad you got it working.
Sven says
When I turn on the concurrency control the output of the CSV is not correct – any idea why?
Jesse says
My final Compose can’t find the JSON Compose output. Everything was working great up to this point.
Jesse says
Watched the video…figured it out! Thanks!!!
Daniela Holzner says
Hi Paulie,
Thanks for your blog and videos. Any chance you can show how to do this, when the CSV is received via email?
Many systems generate reports by emailing a CSV. When I use the ‘received email’ as a trigger and ‘get attachments’, the remaining actions need to be within an ‘apply to each’ action (since there could be multiple attachments). BUT when doing this, the CompleteJSON compose action now errors, due to too many nested loops…
“The action ‘JSON’ is nested in a foreach scope of multiple levels. Referencing repetition actions from outside the scope is supported only when there are no multiple levels of nesting.”
I tried working with an ‘append to array’ variable instead, but couldn’t get it to work. It would be great if you could show how to do this! 🙂
VS says
YOU ARE A GOD! Thank you for all the insight. Simple. Effective. Efficient flow.
You rock Paulie!
Virakone says
Hi Paulie,
I have a value in my CSV that has a comma inside it, e.g. "Big Company Inc., Ltd." – how would you recommend handling this?
Johan Magerman says
Hi Paulie,
Great video!
But I have the same issue as Daniela, would like to start my flow by receiving an email and loading the attached csv.
And then the for-each loop of your flow is nested in the for-each that loops through the attachments.
Paulie says
I can help you do what you want to do. Perhaps we could have a teams session, record it and share it for everyone else to see.
Johan Magerman says
Hi Paulie,
Still struggling to get the flow working like I want it to.
I've managed to work around my previous problem by using the method in your updated video to parse a CSV.
But now I'm stuck after that; I can't seem to loop through it. I get an error saying that the flow got an object where it expected an array.
You can contact me if you’d like to have a look for sure!
What I want to do is receive an email with a csv, parse the CSV and send a separate mail for each line of the csv containing the info from that line.
Thanks already for your videos, they've helped me a lot!
Paulie says
Hi Johan, get in touch via the contact form and I will see if I can help you out:
Scot Bickell says
Hi Paulie,
Thanks so much for sharing this and other tips for Power Automate, I have found several of your posts to be very useful.
I have a csv file that has a comma within the field name and data and it is causing this issue:
Field Names:
[
"\"Last",
" First Name\"",
"\"Employee EIN\"",
"\"Date\"",
"\"Hours\"",
"\"Is Time Off\"",
"\" Locations Full Path\"",
"\"Time Off Name\"",
"\"Default Department Full Path\""
]
Data:
{
"\"Last": "\"Barton",
" First Name\"": " Edmond\"",
"\"Employee EIN\"": "\"123-45-6789\"",
"\"Date\"": "\"11/16/2021\"",
"\"Hours\"": "\"8:00\"",
"\"Is Time Off\"": "\"Y\"",
"\" Locations Full Path\"": "\"Midwest/Missouri/Kansas City\"",
"\"Time Off Name\"": "\"Sick/Personal\""
},
{
"\"Last": "\"Churchill",
" First Name\"": " Winston\"",
"\"Employee EIN\"": "\"123-45-6789\"",
"\"Date\"": "\"07/21/2021\"",
"\"Hours\"": "\"8:00\"",
"\"Is Time Off\"": "\"Y\"",
"\" Locations Full Path\"": "\"Northeast/Michigan/Lansing\"",
"\"Time Off Name\"": "\"Vacation\""
},
{
"\"Last": "\"Stafford",
" First Name\"": " Edward\"",
"\"Employee EIN\"": "\"123-45-6789\"",
"\"Date\"": "\"08/27/2021\"",
"\"Hours\"": "\"8:00\"",
"\"Is Time Off\"": "\"Y\"",
"\" Locations Full Path\"": "\"Midwest/Missouri/Kansas City\"",
"\"Time Off Name\"": "\"Vacation\""
}
Is there a method that will allow me to “escape” the comma that is part of the data, but still use the flow for data that might not have embedded commas?
Scot says
I figured out a workaround by using the replace() function before using splitByLines:
replace(outputs('Get_file_content')?['body'],', ',' ')
Paulie says
Well done Scot! There is always a way!
Ralf Alpert says
Great blog, thank you. Just want to also mention that I got the compose to work with the .csv file extension using:
split(base64ToString(outputs('Get_file_content')['body']['$content']), decodeUriComponent('%0A'))
Same as Ted.
Asif Khawaja says
Hi Paul, great article and thank you for sharing the wisdom and the flow export. When I run this flow to create items in a SharePoint list, it works fine with 1,000 items, but when I increased the number to 10,000, the Apply_to_each loop/create item timed out. I used the same sample data as yours. Any ideas how to deal with this timeout issue? Thanks.
MarkSompton says
Thanks, this is a really clear guide but I’m having problems.
I had the "treated as an object and not string" issue as well, but after applying the base64ToString function I end up with "\u0000" before every character in the output, and when I tried using a replace function to get rid of them it didn't do anything.
I can see that my CSV is formatted in UTF-16 and not UTF-8, which is probably the root of the problem, but I can't figure out a workaround. Any ideas?
Also, the CSV seems to be delimited by a tab, which is fine, but I can't figure out what character is being used to denote new lines, because there are new lines in some of the data already. I can open the CSV in Excel without any problems, so there is clearly some distinction that Excel can read but I can't find.
Geraldo José Giansante says
Hi Paul
Thanks for your post, this is a clear guide, a great explanation on your video:
I was able to reproduce the flow Getting File from OneDrive for Business.
I'm a beginner at Power Automate and I was not able to adapt your solution from OneDrive to a SharePoint file. I've used Get file content using path from SharePoint, but the splitNewLine step, coded as follows, did not work:
split(outputs('Get_file_content_using_path')?['body']?['body'],decodeUriComponent('%0A'))
The full message was:
Unable to process template language expressions in action 'splitNewLine' inputs at line '1' and column '10795': 'The template function 'split' expects its first parameter to be of type string. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#split for usage details.'.
I've looked at aka.ms but was not able to solve it.
I hope you could help me!
Best regards
Geraldo (from Brazil – São Paulo)
Karl Nixon says
Hi Paulie, thanks for this blog – very helpful. A couple of questions;
Is there a way to exclude records where the 1st (or nth) character includes a certain value(s)?
Is there a way to exclude certain columns (my data has 30 columns but only 3 are required)?
Best regards
Karl (Sydney, Australia)
Jason Hinkle says
Hey Paul,
Thanks again for the video. I know so much more than I did before watching it. I am still having an issue with the last step, pulling the data out of the loop. You had mentioned in the YouTube comments that it was because I had multiple loops, and you are correct. I used "When a new email arrives (V3)" as a trigger, hoping that I would be able to pull out the attached CSV that is coming from a scheduled export. I was able to do this, but when I pull the attachment it puts it in an apply to each automatically (my guess would be to apply to all attachments in the email). Originally there were 4 attachments, 3 of which were not CSV files, so I used a condition action to say: if the attachment name contains csv, run; if not, stop. I was able to edit my export to only have the CSV that I need, but it still requires me to put in an apply to each when pulling attachments. Any suggestions would be greatly appreciated, but even pointing out the problem has helped greatly, so I appreciate you so much for putting this all together.
Paulie says
Hi Jason,
What you need to do is use a Filter Array action on the attachments array to filter it for attachments that are CSVs. Then you can check the length of the array to confirm that it is one. Then you can use the first expression to get the remaining attachment. Then you'll have no apply to each loops. If you get stuck let me know and I will help you out.
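As a rough sketch of those expressions (the action names are illustrative): the Filter Array input would be triggerOutputs()?['body/attachments'], the filter condition would compare item()?['name'] using "ends with" against .csv, and the text content of the single remaining attachment would then be base64ToString(first(body('Filter_array'))?['contentBytes']).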
Judit says
Hi Paulie,
I'm a complete beginner with Power Automate flows. My final Compose can't find the JSON Compose output.
Gary says
I appreciate your instructions. I'm attempting to parse my contents to load each row of my CSV into a SharePoint site, but I'm running into an error: "Invalid type. Expected Object but got Array".
The contents are only available as dynamic content if my schema type is "object". If I change it to array, only my JSON body is available. Do I need to add another step to parse the body of the JSON, or am I doing something incorrectly?
Andrew Miller says
This is great, thanks so much. Does anyone have any advice for how to do the same thing to a file with tab-separated values?
Paulie says
Hi Andrew,
Instead of:
split(item(), ',')
You could use:
split(item(), decodeUriComponent('%09'))
That should give you a tab split.
Andrew MacLean says
Hi Paulie,
Thanks for this fantastic write up. I was able to do a really useful automation for our organization following this guide which sends Teams messages to users listed in the CSV file when their passwords are about to expire.
Quick question for you around the skip function. My CSV file updates daily and the number of lines changes every day. I'm skipping the first line (headers), same as you did above. Say for example the file has 25 lines: the flow runs fine on lines 2-25, but then runs on line "26", which is blank, and logs it as a failed run overall, even though it successfully ran on lines 2-25.
Is there any way or logic to use the skip function to skip the last line? Or “skip if blank” kind of logic I could use? or even just “stop running when you reach the end of the data”?
Paulie says
Hi Andrew,
There are two ways that I can think of to do this. You can adjust the skip expression and combine it with the take expression. Here is an example which skips the first two lines and omits the final line:
take(skip(outputs('splitNewLine'),2), sub(length(skip(outputs('splitNewLine'),2)),1))
A more reliable method would be to put the output of the splitNewLine action and pass it through a Filter Array action. You would use the output of the splitNewLine action as the input for the filter. Then on the left side of the filter you would use the expression:
length(item())
and choose is greater than and specify 0 as the value. This will remove any blank lines. Then you would need to adjust the apply to each to use the output of the Filter Array action as its input. Both methods should work though.
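In other words, the Filter Array configuration would look roughly like this (names illustrative): "From" = outputs('splitNewLine'), condition = length(item()) is greater than 0, and the apply to each input becomes body('Filter_array').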
Navi says
Hey Scot,
You mentioned that, to escape the comma inside double quotes, you used the replace function. But when I tried to use it, it shows an invalid expression error. Could you please explain which character in the string you replaced, and with what?
Thanks in advance.
Navi (Düsseldorf, Germany)
Laurent says
Thanks Paulie, it worked out well (with the two tips provided in the comments about the Windows delimiter and the tip for suppressing the last empty line)!
But now I'm having a hard time trying to put the result of the JSON array into an Excel table. I use "apply to each", but it seems the expected input cannot directly be the object resulting from the JSON parsing. Any idea?
Aaron says
This is fabulous. How do I take this into Excel in SharePoint? Would this work for files greater than 5MB?
Dan says
Hi Paulie, great content as always! I read this a while ago but have only just had a use case. For some reason, when I view the output after the first split (splitting by CRLF), my first row of data is still on the row with the column headings. All other rows appear as their own row as expected. I took a look at the file in Notepad++, turned on all symbols, and noticed that the end of the first row with the column headings has NUL CR LF whereas the other rows just have CR LF. Any idea how I can get that first row separated from the headings?
Dan says
Ignore my comment; even though it didn't look right in the output of the split, it worked as expected when I continued. Awesome stuff!!
Jean-Marie Berger says
Hi Paul, thank you for the trick, it works very well here! Just one issue with my project, as I use CSV files that have special characters embedded (accents, …). Those are changed right away to the UTF-8 replacement character (#EFBFBD) as soon as I get the file body. Of course, it is carried through to my output… Any Power Automate trick to address this?
Thank you !
Aaron says
Hi Paul, I tried using a pipe delimiter to split the CSV, as my CSV has columns with commas and this is throwing things off. Any help please?
Thanks.
marcos Daniel ruiz says
Thank you very much, that’s what I was looking for. Greetings from Argentina
Shawn Keene says
I've got the same issue as a few others above. You mentioned "Then you can use the first expression to get the remaining attachment," but I cannot seem to make this work. I'm probably just getting the syntax wrong. I have this:
Trigger: Email Arrives with an attachment.
Next step: Apply to Each – Get Attachment (there will only be one but this is required)
Then I need to convert that content bytes to a CSV, so I use “create CSV table” option (this comes just before Paul’s “split new lines” compose step).
But my “create CSV table” step fails for everything I’ve tried (entirely my fault I’m sure). I’ve tried things like:
first(items('Apply_to_each')?['contentBytes'])
first('Get_Attachment_(V2)')?['contentBytes']
first(outputs('Apply_to_each')?['body/contentBytes']
first(triggerOutputs()?['body/contentBytes'])
What am I missing?
cmdrkeene says
Update: I got it, my Get_Attachment step uses this function for the AttachmentID: first(triggerOutputs()?['body/attachments'])?['id']
And then the next step converts it into a text string (CSV text basically) with the following compose step, after which I can continue on as described in this blog beginning with “split new lines”.
base64tostring(outputs('Get_Attachment')?['body/contentBytes'])
Sam says
Thanks so much!! How do you deal with very large CSVs? I have one with 70K rows and apply to each is limited to 5K. Thank you!
Felipe Reck says
Hey Paulie.
Thanks for this tutorial. I'm quite new to Power Apps and Power Automate.
I followed the instructions, and it worked fine until the Parse JSON action, where I got the following error message after testing the flow:
"Unexpected token encountered when reading schema. Expected StartObject, Boolean, got StartArray. Path '', line 1, position 1"
Could you help me understand this?
Felipe Reck says
My previous comment was actually based on the tutorial available on
https://www.youtube.com/watch?v=sXdeg_6Lr3o
and not the tutorial on this page.
Sorry for my mistake.
MJ says
I have a problem with parsing CSV files encoded in UTF-16 LE. Is there any option to tackle this using Power Automate online?
akhil mishra says
Hi Paul,
I need help creating a flow to remove white space from CSV file content. I'm new to Power Automate.
Gowri says
I found this very useful. Simple and very effective approach. Thank you for sharing your knowledge. Much appreciated.
vitss says
You are a hero!
Jared says
Scot you are the man!
This problem has been stumping me all week, thank you for the help
Bradon says
Hi Paul,
Is there a way to stop the split function from splitting by commas that are within quote marks?
My CSV has multiple entries separated by commas in between two quote marks.
However, when you split, it ignores the quotes and splits the data that should be on one line into two.
Is there a way around this?
Thanks
Bradon says
Hi again,
Disregard my last comment.
I found a solution to this issue elsewhere.
I would explain how I fixed it for anyone else, but I’m not quite sure myself…
Cheers
Hayan Saleh says
Hi, very helpful solution, thank you. The output of completeJSON is not listed if I create a compose action outside the loop; it will show inside the loop. What am I missing here?
Thanks
Sam Ivess-Mash says
Hi Bradon – please can you share where you found the fix for the issue of splitting on buried commas, as it has stumped me for far too long and I cannot find a solution!
Mark says
I spent a day trying to figure out how to do this, and watched a ton of click-bait videos that offered no scalable solution. This is so easy and amazing, thank you!
Rick says
Did you ever get around to the version doing this direct from email? If so, is there a link anywhere to it?
Andrew Connell says
I think this is the most useful thing I have seen in PowerAutomate. Ever. Immensely clever way about it as well. Thank you!!! Total genius.
Paulie says
Thank you! I appreciate the kind words.
Hugh Livingston says
Hi.
This is such a helpful solution and I’ve managed to quickly apply this to a problem I’ve had recently. It has also helped with problems I’ve had in the recent past so many many thanks for posting!
Paulie says
Thank you Hugh
Gianluca Romano says
Hi,
even after all these years, your video is still the best, congratulations.
One question: after the last step (creation of the JSON structure and variables), how can I create and filter an array based on a value?
Paulie says
Use a filter array action.
Gianluca Romano says
Ok, in the Filter Array action can I use the JSON output in the "From" field? And what is the syntax for the "value" in the "field"? Thanks in advance.
Aaron says
Wow, this is probably the best walk-through I could find online thank you for taking the time to explain each step! I have one question though, as this didn’t work with my CSV file. That’s because the CSV file I obtain from an application, contains information in a column that has carriage returns, double quotes, and commas in them. That means it looks similar to this:
"Person", "Date", "Amount", "Comments", "ID"
"Mike Leavy", "7/10/2023", "5,345","This is the first line in this field
This is the second line
and so forth", "788"
"Jen Carol", "5/3/2022", "124", "This is another field
where people hit enter and use commas and "quotation marks",
so the output of the csv in this field has carriage returns and commas", "789"
Now when you open the CSV in a text editor, it appears exactly like this:
"Person", "Date", "Amount", "Comments", "ID"\n
"Mike Leavy", "7/10/2023", "5,345","This is the first line in this field\nThis is the second line\nand so forth","788"\n
"Jen Carol", "5/3/2022", "124", "This is another field\nwhere people hit enter and use commas and ""quotation marks"",\nso the output of the csv in this field has carriage returns and commas","789"\n
So what's happening is everywhere a quote appears in that field, the CSV puts double quotes, and commas appear inline exactly as typed, because each field is surrounded by double quotes. However, when I search and split the entries on "\n" or "\r\n" or any type of carriage return within Power Automate, it splits the line right where the first \n appears in the "Comments" field, breaking the rest of the import and pumping out an error. Just wondering if you have any advice for this type of complex CSV file that a very popular application generates but does not allow me to modify? Thank you!!
Ricky says
Hi Aaron,
I’m not Paulie but ran into a similar issue, specifically for commas — you can potentially check out my solution here if it maybe provides some additional direction: https://powerusers.microsoft.com/t5/Building-Flows/How-to-split-amp-parse-CSV-file-with-comma-inside-a-value/m-p/908676#M127948
My solution was a combination of another solution reply in a separate thread.
I'm certain there is a more clever solution out there; this one is not pretty by any means, but it was a quick fix to check whether the array split on a real CSV delimiter comma or on a comma used in a multiline text or currency value.
Fredrick Smith says
Hi Paulie,
I've been binge-watching all your videos, so thank you very much for sharing all your knowledge! This may be a simple question, but unfortunately I'm yet to develop your Power Automate skills. I currently have text in a CSV format and I want to create a table with this text using Power Automate, but I'm struggling a bit. I have searched online, but was wondering if you could point me in the right direction?
Paulie says
I have a video and a blog post on this very subject. Have you seen them? If so, which part are you struggling with?
Fredrick Smith says
I haven't; if you could link them, that would be great and I'll have a look. Again, thank you very much for the fast reply and all your content!
Michelle M. says
Hi Paulie,
Your videos have been extremely helpful. Got a question, though. I have a text file for input. Each line of the file represents a different set of data. I know the names of each element for each line and there are a different number of elements for each line. Do I have to create a JSON statement for each line separately or is there some way to loop through the data (i.e., if line 3 use this JSON, if line 4 use this other JSON, etc.)?
Paulie says
You can do what you need to do with a fancy select statement that makes use of the range() function. It’s not really easy to do though.
Saeed says
Hi Paulie,
Thank you for the explanation. What if I want to take this JSON output and convert it to an Excel file? Can you please explain this too?
Jakub Holásek says
This is a good start, but it is not efficient; anything that has to go through a loop is inefficient.
Imagine 100k-plus rows: even with concurrency it would take fairly long, not even speaking of daily limits on actions.
What to do instead?
Use a Select data action. The input would be the array produced by splitting the string on \r\n, and in the custom output you would define the proper structure in a similar way as before, wrapped with the json function.
Jakub says
To give you an example of how to do it completely without a loop and save time (as an addition to my previous comment):
{
  "inputs": {
    "from": "@skip(split(variables('activityHistory'),decodeUriComponent('%0A')),4)",
    "select": {
      "Object Type": "@{if(greaterOrEquals(length(split(item(),',')), 1), json(string(split(item(),',')[0])), null)}",
      "File Name": "@{if(greaterOrEquals(length(split(item(),',')), 2), json(string(split(item(),',')[1])), null)}",
      "Object Name": "@{if(greaterOrEquals(length(split(item(),',')), 3), json(string(split(item(),',')[2])), null)}",
      "Description": "@{if(greaterOrEquals(length(split(item(),',')), 4), json(string(split(item(),',')[3])), null)}",
      "User Name": "@{if(greaterOrEquals(length(split(item(),',')), 5), json(string(split(item(),',')[4])), null)}",
      "Activity": "@{if(greaterOrEquals(length(split(item(),',')), 6), json(string(split(item(),',')[5])), null)}",
      "Status": "@{if(greaterOrEquals(length(split(item(),',')), 7), json(string(split(item(),',')[6])), null)}",
      "Timestamp": "@{if(greaterOrEquals(length(split(item(),',')), 8), json(string(split(item(),',')[7])), null)}",
      "Previous Value": "@{if(greaterOrEquals(length(split(item(),',')), 9), json(string(split(item(),',')[8])), null)}",
      "Current Value": "@{if(greaterOrEquals(length(split(item(),',')), 10), json(string(split(item(),',')[9])), null)}"
    }
  }
}
Andrew says
Hi Paulie
Thank you for the excellent video! I’m having trouble with the splitNewLine action:
"InvalidTemplate. Unable to process template language expressions in action 'splitNewLine' inputs at line '0' and column '0': 'The template language expression 'split(outputs('Get_file_content')?['body'], decodeUriComponent('%0D%0A'))' cannot be evaluated because property 'body' cannot be selected. Array elements can only be selected using an integer index. Please see https://aka.ms/logicexpressions for usage details.'."
My source file is stored in Sharepoint, hence the ‘Get_file_content’ action, as per your note in the instructions.
Any tips greatly appreciated!
Paulie says
What is the file extension of the file in SharePoint? I think perhaps the content type is not being inferred and your body content is base64. Try renaming the file to .txt and see if you get any better results.
Andrew says
Thanks so much for the speedy reply! The original file is a CSV; I tried changing it to .txt but I'm still getting the same error. The content type is being inferred in the Get_file_content action. If it helps, the file is in Windows (CRLF) format / UTF-16 LE.
Paulie says
I’m not sure, use the get in touch form and I will have a look with you tomorrow.
Eric says
Andrew & Paulie, please post a solution if you found one. I am getting the same “Array elements can only be selected using an integer index.” error.
Andrew says
Sorry Eric I’ve had to abandon it for now I’m afraid
Philip Rivard says
Hello Paulie,
I get stuck at the second compose action where you use a "carriage return" to delimit each row. I don't know if it is the new Power Automate version, but this is not allowed anymore; I get an invalid parameter error…
Any solutions ?
Thanks,
andrew says
Hi Scot
How did you add the expression for the replace?
Scot says
I figured out a workaround by using the replace() function before using splitByLines:
replace(outputs('Get_file_content')?['body'],', ',' ')
LS says
For anyone else getting the error that file content cannot be split because it is an object and not a string, the solution found on this reddit thread worked for me: https://www.reddit.com/r/MicrosoftFlow/comments/t6ssl0/getting_an_error_when_i_try_to_split_a_csv_error/
split(base64ToString(outputs('Get_file_content')?['body']['$content']), decodeUriComponent('%0D%0A'))
Aadil Ansareen says
This was amazing, thank you!
adnuka says
Hi all,
Do you know how to append these values to a SQL database using the Insert row (V2) action? I converted the CSV to JSON. After that, I tried to insert the values into a SQL table. I've tried multiple approaches and could not succeed with any of them.