Wednesday, October 11, 2017

Converting a datetime to a Client-Specified string

How do you convert a couple of datetime values to a string for use in a report when the client has specific formatting requirements that don't align with any of the built-in T-SQL format styles?  You dig in and go hunting.  Specifically, the client wants the start and end datetime values to be displayed in the header of a report as one of the following:

October 11, 2017 18:06 - 21:06
October 11, 2017 18:14 - October 12, 2017 07:14

See what I did there?  Notice that the month is fully spelled out?  See how we collapse the date if the times are on the same day?  Here is the code to do it:
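
In sketch form (@Start and @End stand in for whatever the report passes in; your version may differ in the details), it looks something like this:

DECLARE @Start datetime = '2017-10-11 18:14';
DECLARE @End   datetime = '2017-10-12 07:14';

-- Pull the pieces off the start value, padding the time parts to two digits
DECLARE @StartText varchar(50) =
      DATENAME(MONTH, @Start) + ' ' + DATENAME(DAY, @Start) + ', ' + DATENAME(YEAR, @Start) + ' '
    + RIGHT('0' + CAST(DATEPART(HOUR, @Start) AS varchar(2)), 2) + ':'
    + RIGHT('0' + CAST(DATEPART(MINUTE, @Start) AS varchar(2)), 2);

DECLARE @EndTime varchar(5) =
      RIGHT('0' + CAST(DATEPART(HOUR, @End) AS varchar(2)), 2) + ':'
    + RIGHT('0' + CAST(DATEPART(MINUTE, @End) AS varchar(2)), 2);

-- Collapse the end date when both values fall on the same day
SELECT @StartText + ' - '
    + CASE WHEN CAST(@Start AS date) = CAST(@End AS date)
           THEN @EndTime
           ELSE DATENAME(MONTH, @End) + ' ' + DATENAME(DAY, @End) + ', ' + DATENAME(YEAR, @End) + ' ' + @EndTime
      END AS ReportHeader;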

The main trick is to just keep pulling off parts and representing them appropriately.

Wednesday, July 19, 2017

Reading Structured Binary files in C#: Part 17

Here we go again.  The last bit we did was pulling out the metadata stream headers which spat out this list:

Starting Address: 0x290
iOffset: 0x6C
iSize: 0x104
rcName: #~

Starting Address: 0x29C
iOffset: 0x170
iSize: 0x100
rcName: #Strings

Starting Address: 0x2B0
iOffset: 0x270
iSize: 0x1C
rcName: #US

Starting Address: 0x2BC
iOffset: 0x28C
iSize: 0x10
rcName: #GUID

Starting Address: 0x2CC
iOffset: 0x29C
iSize: 0x50
rcName: #Blob

for the HelloWorld_CSC_2.0.exe file I tested it with.  The first stream on the list is named #~, which means it is going to be fun to deal with because it is a compressed metadata stream.  The .NET IL Assembler book has a nifty drawing showing all 5 of the streams above:


and looking at the far right, you can see that we have yet another header to dissect.  The structure of the header is:

Reserved (4 bytes): Reserved; set to 0.
Major (1 byte): Major version of the table schema (1 for v1.0 and v1.1; 2 for v2.0 or later).
Minor (1 byte): Minor version of the table schema (0 for all versions).
Heaps (1 byte): Binary flags indicate the offset sizes to be used within the heaps:
  • A 4-byte unsigned integer offset is indicated by 0x01 for a string heap, 0x02 for a GUID heap, and 0x04 for a blob heap.
  • If a flag is not set, the respective heap offset is a 2-byte unsigned integer.
  • A #- stream can also have special flags set: flag 0x20, indicating that the stream contains only changes made during an edit-and-continue session, and flag 0x80, indicating that the metadata might contain items marked as deleted.
Rid (1 byte): Bit width of the maximal record index to all tables of the metadata; calculated at run time (during the metadata stream initialization).
MaskValid (8 bytes): Bit vector of present tables, each bit representing one table (1 if present).
Sorted (8 bytes): Bit vector of sorted tables, each bit representing a respective table (1 if sorted).


It is all nice and fixed-size, so it will be easy to get started.  Let's dig right into the metadata stream header for the metadata...what?  How about the metadata metadata stream header?  All this naming stuff is hard.

So, here is the code:
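
Roughly, it looks like this (the type and field names are mine, and ReadStruct<T> stands in for the Marshal-based deserialization helper used throughout the series; the GitHub project has the real thing):

using System.IO;
using System.Runtime.InteropServices;

// Sketch of the #~ (compressed metadata) stream header; layout follows the table above.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MetadataTableStreamHeader
{
    public uint Reserved;      // 4 bytes, always 0
    public byte Major;         // table schema major version
    public byte Minor;         // table schema minor version
    public byte Heaps;         // heap offset size flags (0x01 string, 0x02 GUID, 0x04 blob)
    public byte Rid;           // bit width of the maximal record index
    public ulong MaskValid;    // bit vector of tables present in the stream
    public ulong Sorted;       // bit vector of sorted tables
}

public static class TableStreamReader
{
    // Read a blittable struct from the given file offset (same idea as the
    // deserialization helper from the earlier posts).
    public static T ReadStruct<T>(BinaryReader reader, long offset) where T : struct
    {
        reader.BaseStream.Seek(offset, SeekOrigin.Begin);
        byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(T)));
        GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        try { return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T)); }
        finally { handle.Free(); }
    }
}
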
But there is a catch.  The bits in MaskValid represent tables, and the header is followed by a 4-byte unsigned integer for each '1' bit, giving the number of records in that table.  I will work on that part next; this is good enough for now.

Keep compiling!

Thursday, June 1, 2017

Great open source project for testing SharePoint email: MailCatcher

I had to configure SMTP on a SharePoint server where QA and Developers need to view the email output.  Normally I use SMTP4Dev to simulate my SMTP servers for development, but in this case the QA and Developers need access to the email without direct access to the server.  I looked around and found MailCatcher, a Ruby-based program that acts as an SMTP server and web application.  To support MailCatcher I needed to install Ruby using the Ruby Installer for Windows.  The installation of Ruby is quite easy, and once it is in place, installing MailCatcher takes a single command:

gem install mailcatcher
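
To make it reachable from the QA and Developer machines rather than just localhost, I start it bound to all interfaces (double-check the flags against mailcatcher --help; the SMTP and web ports default to 1025 and 1080):

mailcatcher --ip 0.0.0.0 --smtp-port 1025 --http-port 1080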

Once it is running, I get a very pretty interface to give to my Developers and QA:


Tuesday, May 23, 2017

Open PowerShell as another user

This comes up quite often if you play around with PowerShell and SharePoint.  Just because you have administrative rights in SharePoint, it doesn't follow that you can access a site programmatically from PowerShell.  For that, you need to be granted Shell Admin rights on the content database that holds the site.  Some accounts in SharePoint are more privileged than others, and the farm account is one of them.  What you can do, assuming that you know the farm account credentials, is open a PowerShell window as the farm account and run the Add-SPShellAdmin cmdlet with your account.  To open PowerShell as a different user, right click the SharePoint 2013 Management Shell shortcut (that's PowerShell with the Microsoft.SharePoint.PowerShell snap-in already loaded), then Shift+right click the SharePoint 2013 Management Shell entry in the context menu and choose Run as different user:
Open PowerShell as a different user
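
Once the elevated shell is open under the farm account, the grant itself is a one-liner (the account and site URL below are placeholders for your own):

Add-SPShellAdmin -UserName "DOMAIN\myaccount" -Database (Get-SPContentDatabase -Site "http://sharepoint/sites/teamsite")
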
That's it!  Short and sweet.

Wednesday, May 10, 2017

Quick little SQL trick: A Reverse Like

Ok, quick one here but a good one.

Background: This came up when dealing with SharePoint Content Enrichment Web Service (CEWS) output. CEWS is a hook into the SharePoint Search crawler that gets called for every item that is crawled. We want to store this output in a database for reporting purposes. CEWS gets a list of Managed Properties, one of which can be the Path to the item. The question was, how can we tie this back to the site? Well, in our case we have Managed Properties on the site (more on this later, I promise) and we keep a site list in a table. The site list has the URL for the site, and all items are going to have paths that extend this URL. So, we want to do a LIKE comparison between the path sent in and a column in the site table.  Here is what we are doing:

select * from SiteTable where '<path sent in for CEWS>' like URL+'%' 

See, the path sent for the crawled item is going to contain the URL from the SiteTable.  Job done.
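
To make it concrete with made-up values: an item crawled at http://portal/sites/hr/Shared Documents/Policies/Leave.docx would match the row whose URL is http://portal/sites/hr:

select * from SiteTable where 'http://portal/sites/hr/Shared Documents/Policies/Leave.docx' like URL + '%'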

Tuesday, May 9, 2017

Reading Structured Binary files in C#: Part 16

Time to get into the streams!  The next piece of the puzzle is the Metadata Storage Header Structure:

fFlags (BYTE): Reserved; set to 0
[padding] (BYTE)
iStreams (WORD): Number of streams

Once we load that, we will know how many Metadata Stream Header Structures follow:

iOffset (DWORD): Offset in the file for this stream.
iSize (DWORD): Size of the stream in bytes.
rcName (char[32]): Name of the stream; a zero-terminated ASCII string no longer than 31 characters (plus zero terminator). The name might be shorter, in which case the size of the stream header is correspondingly reduced, padded to the 4-byte boundary.

So let's get to it.  I got those structures from the .NET IL Assembler book, Chapter 5, which is all about the Metadata Tables.  A Whole Chapter.  I don't think I will get through it in this post, but I will try to get it fully covered this week.  Anyway, the first bit is dead easy.  Here it is:
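
Roughly (the struct name is mine; it deserializes with the same Marshal-based helper as the other headers, and the GitHub project has the full version):

using System.Runtime.InteropServices;

// Sketch of the metadata storage header: one byte of flags, one byte of padding,
// then the count of stream headers that follow. Read it at the offset right
// after the storage signature's version string.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MetadataStorageHeader
{
    public byte fFlags;      // reserved; set to 0
    public byte padding;
    public ushort iStreams;  // number of stream headers that follow
}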

In the Stream Headers, the rcName is another variable-length string.  But it is bounded, so I can easily use the standard struct deserialization to load the data.  The only tricky part will be to find out where to start!  Ok, I know that the Storage Header follows the Storage Signature, which is pointed to by the CLRHeader, so let's just grab that and add in the length of the pVersion, which is stored in iVersionString.  Easy enough. 

Well, I didn't read the part above closely enough.  I skipped right over the 'the size of the stream header is correspondingly reduced' part.  I pulled the first of the 5 with no problem, but the rest were trash.  So, another encapsulated struct.  But this time I don't have a length, just a max length.  I will just need to loop and keep going until I read a 0. 

That almost worked.  I need to read chunks of 4 bytes (the padding, remember?) and then it works fine.  Here is my implementation:
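
The implementation took roughly this shape (class and variable names are mine; the GitHub project has the full version): read the two DWORDs, then pull rcName in 4-byte chunks until a chunk contains the zero terminator.

using System.Collections.Generic;
using System.IO;
using System.Text;

// Sketch of a metadata stream header reader: iOffset and iSize are fixed,
// rcName is read in 4-byte padded chunks until a chunk contains a 0 byte.
public class MetadataStreamHeader
{
    public long StartingAddress { get; private set; }
    public uint iOffset { get; private set; }
    public uint iSize { get; private set; }
    public string rcName { get; private set; }

    public MetadataStreamHeader(BinaryReader reader)
    {
        StartingAddress = reader.BaseStream.Position;
        iOffset = reader.ReadUInt32();
        iSize = reader.ReadUInt32();

        var nameBytes = new List<byte>();
        bool done = false;
        while (!done && nameBytes.Count < 32)       // rcName is at most 32 bytes
        {
            byte[] chunk = reader.ReadBytes(4);     // names are padded to a 4-byte boundary
            foreach (byte b in chunk)
            {
                if (b == 0) { done = true; }        // zero terminator ends the name
                else if (!done) { nameBytes.Add(b); }
            }
        }
        rcName = Encoding.ASCII.GetString(nameBytes.ToArray());
    }
}

The nice part of doing it this way is that each constructor leaves the reader sitting at the start of the next header, so reading iStreams of them in a row is just a loop.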

Well, I learned a bit and was able to re-use a new (to me) technique, which means I might actually retain it. That will do for tonight, more tomorrow.
Keep reaching for the heights.

Reading Structured Binary files in C#: Part 15

Time to dig into the Metadata.  The first part is the GeneralMetadataHeader which is defined in the .NET IL Assembler book as:

lSignature (DWORD): "Magic" signature for physical metadata, currently 0x424A5342, or, read as characters, BSJB: the initials of four "founding fathers" Brian Harry, Susan Radke-Sproull, Jason Zander, and Bill Evans (I'd better make that "founders"; Susan might object to being called a father), who started the runtime development in 1998.
iMajorVer (WORD): Major version (1)
iMinorVer (WORD): Minor version (1)
iExtraData (DWORD): Reserved; set to 0
iVersionString (DWORD): Length of the version string
pVersion (BYTE[]): Version string

So, where does it start?  If we look back in the CLRHeader, we see the Metadata field, which is an RVA.  Let's see if we can get some data out.  From the beginning, things are not going to be easy.  The iVersionString gives the length of pVersion, which means that a simple deserialize is out.  I am going to create the top 'fixed' portion of the header as normal and then deal with the pVersion string separately.  That means that I am going to use an intermediate structure and implement the actual GeneralMetadataHeader as a class.  Here is my implementation:
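
The shape of it is roughly this (the intermediate struct plus the class on top; names are mine and the real version is in the GitHub project):

using System.IO;
using System.Runtime.InteropServices;
using System.Text;

// Sketch: the fixed portion of the metadata header as a blittable struct...
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MetadataHeaderFixed
{
    public uint lSignature;       // 0x424A5342 ("BSJB")
    public ushort iMajorVer;
    public ushort iMinorVer;
    public uint iExtraData;       // reserved
    public uint iVersionString;   // length of the version string that follows
}

// ...wrapped in a class whose constructor also pulls the variable-length version string.
// metadataOffset is the Metadata RVA from the CLR header, already converted to a file position.
public class GeneralMetadataHeader
{
    public MetadataHeaderFixed Fixed { get; private set; }
    public string pVersion { get; private set; }

    public GeneralMetadataHeader(BinaryReader reader, long metadataOffset)
    {
        reader.BaseStream.Seek(metadataOffset, SeekOrigin.Begin);
        byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(MetadataHeaderFixed)));
        GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        try
        {
            Fixed = (MetadataHeaderFixed)Marshal.PtrToStructure(
                handle.AddrOfPinnedObject(), typeof(MetadataHeaderFixed));
        }
        finally { handle.Free(); }

        // pVersion is iVersionString bytes long and zero padded; trim the padding off.
        byte[] version = reader.ReadBytes((int)Fixed.iVersionString);
        pVersion = Encoding.ASCII.GetString(version).TrimEnd('\0');
    }
}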

Note that the constructor for the type actually reads in the variable-length string.  I might go back and implement the CodeViewHeader in a similar way, as this is actually cleaner than I was expecting.  That is enough for tonight; I will get into the actual Metadata streams in the next post.

Keep your code clean!

Sunday, May 7, 2017

Reading Structured Binary files in C#: Part 14

In the last post, we read the DebugDirectory from the .text section of the file and found a pointer to a CodeView header.  Let's read it!  First, we need to find out what it is!  I have been relying on the .NET IL Assembler book of late, and didn't think to look back in our old trusty MS PE/COFF spec to see if it had a definition of the DebugDirectory, but it does!  Here is the spec from there:

Characteristics (offset 0, 4 bytes): Reserved, must be zero.
TimeDateStamp (offset 4, 4 bytes): The time and date that the debug data was created.
MajorVersion (offset 8, 2 bytes): The major version number of the debug data format.
MinorVersion (offset 10, 2 bytes): The minor version number of the debug data format.
Type (offset 12, 4 bytes): The format of debugging information. This field enables support of multiple debuggers. For more information, see section 6.1.2, "Debug Type."
SizeOfData (offset 16, 4 bytes): The size of the debug data (not including the debug directory itself).
AddressOfRawData (offset 20, 4 bytes): The address of the debug data when loaded, relative to the image base.
PointerToRawData (offset 24, 4 bytes): The file pointer to the debug data.

and it even provides an updated list of Debug Type definitions, which I have used to update the code:

IMAGE_DEBUG_TYPE_UNKNOWN (0): An unknown value that is ignored by all tools.
IMAGE_DEBUG_TYPE_COFF (1): The COFF debug information (line numbers, symbol table, and string table). This type of debug information is also pointed to by fields in the file headers.
IMAGE_DEBUG_TYPE_CODEVIEW (2): The Visual C++ debug information.
IMAGE_DEBUG_TYPE_FPO (3): The frame pointer omission (FPO) information. This information tells the debugger how to interpret nonstandard stack frames, which use the EBP register for a purpose other than as a frame pointer.
IMAGE_DEBUG_TYPE_MISC (4): The location of DBG file.
IMAGE_DEBUG_TYPE_EXCEPTION (5): A copy of .pdata section.
IMAGE_DEBUG_TYPE_FIXUP (6): Reserved.
IMAGE_DEBUG_TYPE_OMAP_TO_SRC (7): The mapping from an RVA in image to an RVA in source image.
IMAGE_DEBUG_TYPE_OMAP_FROM_SRC (8): The mapping from an RVA in source image to an RVA in image.
IMAGE_DEBUG_TYPE_BORLAND (9): Reserved for Borland.
IMAGE_DEBUG_TYPE_RESERVED10 (10): Reserved.
IMAGE_DEBUG_TYPE_CLSID (11): Reserved.
IMAGE_DEBUG_TYPE_REPRO (16): PE determinism or reproducibility.
However, I struck out when looking for a definition of the CodeView header.  I was able to find some example code on DebugInfo that helped significantly.  From it I found that the structure looks something like:

// CodeView RSDS debug information 
// (used when debug information is stored in a PDB 7.00 file) 
struct CV_INFO_PDB70 
{
 DWORD      CvSignature; 
 GUID       Signature;       // unique identifier 
 DWORD      Age;             // an always-incrementing value 
 BYTE       PdbFileName[1];  // zero terminated string with the name of the PDB file 
};

When I dig in I run into a couple of issues.  First, Guid is a perfectly fine C# type, but it won't deserialize directly.  Easy enough; PInvoke.Net has a Guid definition that I can leverage.  The next issue is the null-terminated string.  I can't use the same type of deserialization with a variable length, but the SizeOfData in all of the instances with debug information was the same value (0x11C), so I might be able to fake it enough to get data back.  Let's see.  Here is the code I came up with:
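
The shape of it is roughly this (the GUID fields mirror the PInvoke.Net definition, the file name buffer is deliberately faked at a fixed 0x104 bytes, and the names here are illustrative rather than the exact code in the GitHub project):

using System.Runtime.InteropServices;

// GUID laid out as plain fields so the whole record can be marshaled
// (same idea as the PInvoke.Net definition).
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct RawGuid
{
    public uint Data1;
    public ushort Data2;
    public ushort Data3;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 8)]
    public byte[] Data4;
}

// Sketch of the CodeView RSDS record with the PDB path faked as a fixed-size buffer;
// read with the same Marshal-based helper as the other headers.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct CodeViewHeader
{
    public uint CvSignature;   // 'RSDS' for PDB 7.00
    public RawGuid Signature;  // unique identifier matching the PDB
    public uint Age;           // an always-incrementing value
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 0x104)]
    public byte[] PdbFileName; // zero-terminated path to the PDB, padded out to 0x104 bytes
}

Encoding.ASCII.GetString on PdbFileName, trimmed of its trailing zeros, then hands back the path.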

Notice that the size of the array is 0x104?  Wait, let me change that to decimal and see if it helps: 0x104 = 260.  Now, is that a familiar number to any of you Windows users?  You got it, it is the maximum length for a path (MAX_PATH).  That means that my 'kludge' is going to be mostly fine for Windows.  I can't wait (but will) to try longer paths on other systems and via the command prompt and...and...

Ok, that will do it for now.  Next time we will look at the Metadata section and see what it has in store for us.

Keep Coding!

Saturday, May 6, 2017

Reading Structured Binary files in C#: Part 13

First, some errata.  There was an issue with the code from the previous article, I had incorrectly been subtracting the CLR RVA address from the Section Table RVA rather than the reverse.  That meant that my starting address for the section read was off, which made the data wrong.  That is what was causing the strange Strong Name Signature entry that I was seeing earlier.

What is super interesting to me is the chance to explain how my tests didn't catch this.  The reason is that I am doing exploratory testing into the unknown.  I am writing my tests to validate my findings, not test my findings for correctness.  This is the baseline for me as I don't have a dissected .exe to compare my findings with.  You, however, do.  You have this!  For me this is validation that it is worth doing and putting out in such a public way.  Nothing I have seen presents the information in this manner.  I have updated the GitHub project with the new tests and programs.  What is cool about using the GitHub Gist for my code is that I can update the Gist and it will be reflected in the article.

On to the next section!  The .text section has the following format:

Import Address Table
CLR Header
Strong Name Signature (Optional)
IL Code and Managed Structure Exception Handling Tables (Optional)
Debug Directory (Optional)
Metadata
Managed Resources (Optional)
Unmanaged Export Stubs (Optional)
VTFixup Table (Optional)
CLR Startup Stub

The assemblies are not signed, so the Strong Name Signature is not included.  I know this from the code because the StrongNameSignature entry in the CLR Header is 0x0 rather than the RVA of the strong name.

Next, I see if we have an IL Code and Managed Structure Exception Handling Table by checking the ExceptionTable from the OptionalHeaderDataDirectories.  In this case, all of the assemblies have 0x0 for the entry, so that optional table is not there.

The next optional entry is the Debug Directory, and we again look in the OptionalHeaderDataDirectories for an address.  About half of the assemblies contain data in the Debug entry, so I will pick one of those and dig into the value it contains.  From the .NET IL Assembler book I know that the entry contains an RVA pointing to the Debug Directory, which in turn holds a pointer to a CodeView-style header containing the path to the PDB.  I expect the header to be in the .text section, but let's add some code to see.

I looked up the format of the Debug Directory structure on MSDN and converted it to a structure:
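
It came out looking roughly like this (the real version is in the GitHub project; field names and widths follow the MSDN table above):

using System.Runtime.InteropServices;

// Sketch of the debug directory entry as a blittable struct.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct DebugDirectory
{
    public uint Characteristics;   // reserved, must be zero
    public uint TimeDateStamp;     // when the debug data was created
    public ushort MajorVersion;    // debug data format major version
    public ushort MinorVersion;    // debug data format minor version
    public uint Type;              // IMAGE_DEBUG_TYPE_* value
    public uint SizeOfData;        // size of the debug data, excluding this directory
    public uint AddressOfRawData;  // RVA of the debug data when loaded
    public uint PointerToRawData;  // file offset of the debug data
}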

We read it the same way we have been reading the other structures, and we test it as before.  Check out the code on GitHub if you want to see the implementations.  Next I will pull the CodeView section and get the path to the PDB.  It may prove 'interesting' as the data contains a null-terminated string.

Stay tuned.  Keep digging in!