How do I convert this PowerShell code to C#?


I need to convert this piece of code to C#, but I am not familiar enough with PowerShell to understand what it is doing and then rewrite it in C#. It basically takes an integer and returns a corresponding tag.

I tried to write it myself, but it just did not seem right.

param
(
   [ Parameter( Mandatory = $true ) ]
   [ int ]
   $Tag
)

$maxNumericTag = 0x0000FFFF;
$minOldSchemeHighByteValue = 36;
$symbolSpace = "abcdefghijklmnopqrstuvwxyz0123456789";

if( $Tag -le $maxNumericTag )
{
   return $Tag;
}
elseif( $minOldSchemeHighByteValue -le ( $Tag -shr 24 ) )
{
   return [ char ]( $Tag -shr 24 -band 0xFF ) + [ char ]( $Tag -shr 16 -band 0xFF ) + 
          [ char ]( $Tag -shr 8 -band 0xFF ) + [ char ]( $Tag -band 0xFF );
}
else
{
   return 
       $symbolSpace[ $Tag -shr 24 -band 0x3F ] + $symbolSpace[ $Tag -shr 18 -band 0x3F ] + 
       $symbolSpace[ $Tag -shr 12 -band 0x3F ] + $symbolSpace[ $Tag -shr 6 -band 0x3F ] + 
       $symbolSpace[ $Tag -band 0x3F ];
}
asked on Stack Overflow Aug 21, 2019 by Nisha Shukla • edited Aug 22, 2019 by halfer

1 Answer


I spent the whole night thinking about this posting. I knew the algorithm looked familiar (some sort of IBM-style encoding), but what was really bothering me was the input and output. I finally realized the input and output needed to be an array of 4 bytes, and then I saw what the code was actually doing: the tag is split into 6-bit groups, each an index into a 36-character symbol space, where the alphabet is the first 26 characters and the digits are the next 10. The maximum value of each 6-bit group is 0x3F.
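As a worked example (my own made-up value, not from the original post): the old-scheme tag 0x0005A6DC has a high byte of 0 (below 36) and splits into the 6-bit indices 0, 1, 26, 27 and 28, which look up to "ab012":

// Illustration only: decode the made-up old-scheme tag 0x0005A6DC.
uint tag = 0x0005A6DC;
string symbolSpace = "abcdefghijklmnopqrstuvwxyz0123456789";
char[] decoded =
{
    symbolSpace[(int)((tag >> 24) & 0x3F)], // index 0  -> 'a'
    symbolSpace[(int)((tag >> 18) & 0x3F)], // index 1  -> 'b'
    symbolSpace[(int)((tag >> 12) & 0x3F)], // index 26 -> '0'
    symbolSpace[(int)((tag >> 6) & 0x3F)],  // index 27 -> '1'
    symbolSpace[(int)(tag & 0x3F)],         // index 28 -> '2'
};
Console.WriteLine(new string(decoded));     // prints "ab012"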

In ASCII, the first 48 codes are control and punctuation characters, with the digit zero at 48 (0x30). So to determine whether the old format (IBM) or the new format (ASCII) is being used, the person who wrote the code looked at the highest byte: if it is less than 36 it is the old format, and if it is 36 or greater it is the new format.

To convert a digit index (26 to 35) from the symbol space to ASCII (48 to 57), I subtracted 26 and then added 48 (the character '0'); the letter indices (0 to 25) map directly to 'a' through 'z' by adding 'a'.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
        }
        static byte[] Script(byte[] byteTag)
        {
            // Reassemble the 32-bit tag from the four input bytes (big-endian).
            uint tag = ((uint)byteTag[0] << 24) | ((uint)byteTag[1] << 16) |
                       ((uint)byteTag[2] << 8) | byteTag[3];
            uint maxNumericTag = 0x0000FFFF;
            uint minOldSchemeHighByteValue = 36; // first value past the last symbolSpace index (0-35)
            byte[] results = new byte[5];
            results[4] = 0; // null terminator when fewer than 5 characters are produced

            if (tag <= maxNumericTag)
            {
                // Purely numeric tag: the bytes pass through unchanged.
                Array.Copy(byteTag, results, 4);
            }
            else if (minOldSchemeHighByteValue <= (tag >> 24))
            {
                // New scheme: each byte is already an ASCII character,
                // so the script is just converting an int to a byte[].
                Array.Copy(byteTag, results, 4);
            }
            else
            {
                // Old scheme: five 6-bit groups, each an index into
                // "abcdefghijklmnopqrstuvwxyz0123456789".
                results[0] = MapSymbol((tag >> 24) & 0x3F);
                results[1] = MapSymbol((tag >> 18) & 0x3F);
                results[2] = MapSymbol((tag >> 12) & 0x3F);
                results[3] = MapSymbol((tag >> 6) & 0x3F);
                results[4] = MapSymbol(tag & 0x3F);
            }
            return results;
        }

        static byte MapSymbol(uint index)
        {
            // Indices 0-25 are 'a'-'z'; indices 26-35 are '0'-'9'
            // (subtract 26, then add 48, the character '0').
            return index < 26 ? (byte)('a' + index) : (byte)('0' + (index - 26));
        }
    }
}
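A quick usage sketch (my own addition, using the same made-up input as above, dropped into the Program class so it can call Script); it assumes the 4-byte argument holds the packed tag in big-endian order:

static void Demo()
{
    // Made-up old-scheme tag 0x0005A6DC, passed as big-endian bytes.
    byte[] input = { 0x00, 0x05, 0xA6, 0xDC };
    byte[] output = Script(input);

    // Old-scheme decode: expected output is "ab012".
    Console.WriteLine(Encoding.ASCII.GetString(output));
}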
answered on Stack Overflow Aug 22, 2019 by jdweng • edited Aug 22, 2019 by jdweng

User contributions licensed under CC BY-SA 3.0