
> The problem is that PowerShell uses aliases that match UNIX command line utility names

Agreed.

> If someone is not aware of the difference and they use PowerShell to process production data, their data could be corrupted without them realizing it.

Someone coming from UNIX-land shouldn't be writing PowerShell scripts on day one. They really should be diving into the documentation first. Microsoft's documentation on PS is _excellent_, and here is the page about pipelines: https://learn.microsoft.com/en-us/powershell/module/microsof...

Notably:

> To support pipelining, the receiving cmdlet must have a parameter that accepts pipeline input. Use the Get-Help command with the Full or Parameter options to determine which parameters of a cmdlet accept pipeline input.
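For example (using Stop-Process purely as an arbitrary cmdlet, not one from the docs page), that check looks something like this:

    # List the parameters of Stop-Process that accept pipeline input
    Get-Help Stop-Process -Parameter * |
        Where-Object { $_.pipelineInput -match 'true' } |
        Select-Object name, pipelineInput

    # Or read the full help and look for "Accept pipeline input?" under each parameter
    Get-Help Stop-Process -Full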

then further down:

---

Using native commands in the pipeline

PowerShell allows you to include native external commands in the pipeline. However, it is important to note that PowerShell's pipeline is object-oriented and does not support raw byte data.

Piping or redirecting output from a native program that outputs raw byte data converts the output to .NET strings. This conversion can cause corruption of the raw data output.

As a workaround, call the native commands using cmd.exe /c or sh -c and use the | and > operators provided by the native shell.

---
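A minimal sketch of that workaround (curl.exe and the file name are just placeholders for any native program that emits raw bytes):

    # Let the native shell handle the redirect, so PowerShell never
    # converts the byte stream to .NET strings.
    cmd /c "curl.exe -s https://example.com/file.bin > file.bin"

    # Same idea on Linux/macOS under pwsh:
    sh -c 'curl -s https://example.com/file.bin > file.bin'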

To work with external commands and pipelining, you can refer to this SO question (the top answer explains it very well): https://stackoverflow.com/questions/8097354/how-do-i-capture...
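For plain text output the usual pattern is straightforward (git here is just an example native command, not something from that answer):

    # Each line of the native command's stdout becomes a .NET string
    $lines = & git status --porcelain

    # Check the native exit code explicitly
    if ($LASTEXITCODE -ne 0) { throw "git exited with code $LASTEXITCODE" }

    # Merge stderr into the captured output if you need both streams
    $all = & git status --porcelain 2>&1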



> Piping or redirecting output from a native program that outputs raw byte data converts the output to .NET strings. This conversion can cause corruption of the raw data output.

And this behavior is terrible and one of many "gotchas" that PowerShell has. Fortunately, there is a chance it will be "fixed": https://github.com/PowerShell/PowerShell/issues/1908#issueco...
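The corruption is easy to observe for yourself (curl.exe and the URL below are placeholders; on versions of PowerShell where native output is still converted to strings, the two hashes won't match):

    # Redirect handled by PowerShell: the byte stream goes through string conversion
    curl.exe -s https://example.com/image.png > via-powershell.png

    # Redirect handled by cmd.exe: bytes written as-is
    cmd /c "curl.exe -s https://example.com/image.png > via-cmd.png"

    # Compare the results
    Get-FileHash via-powershell.png, via-cmd.png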



