Common DTE API Operations

Jan 26, 2011 at 2:16 AM
Edited Jan 26, 2011 at 2:16 AM

Hi all,

I've been tasked with compiling a list of common operations on the Visual Studio object model that one might perform in a package, presumably through install.ps1 or uninstall.ps1. Such a list might start like:

  • Get a list of Assembly references in a project
  • Add an Assembly as a reference to a project
  • Remove an Assembly reference from a project
  • Get a list of project items
  • Add an existing or new item to a project
  • Remove an existing item from a project
  • Apply a transformation to a project file
  • Unload a project
  • Reload a project
  • Close a project
  • Add a new project
  • Remove a project

This is obviously not an exhaustive list. I'll also take a look through the existing packages in our repository to see what sorts of things people are doing.
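As a concrete starting point, the first few operations above can already be experimented with in the console via the DTE project object. A rough sketch (the assembly name is illustrative, and this assumes a C#/VB project whose Object property exposes the VSLangProj References collection):

# list, add and remove assembly references on the current project
$project = Get-Project
$project.Object.References | Select-Object Name, Path
$project.Object.References.Add("System.Web") | Out-Null
$project.Object.References.Item("System.Web").Remove()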

The issue tracking this lies at http://nuget.codeplex.com/workitem/310

I envisage these commands as being available at runtime in the NuGet Project Console so that users building package installers can experiment easily at runtime to get the steps correct. They would be integrated with the existing NuGet commands (Get-Project, Get-Package etc.)

Please post suggestions here so I can triage. Thanks!

Jan 26, 2011 at 2:31 AM
  1. I would like to see the ability to create a solution when one does not exist. This would enable a ton of automation scenarios that are not currently possible. Right now NuGet is kind of useless until you have at least one project and a solution. Maybe you could take care of the solution as part of Add-Project, creating one when there is no solution open in the IDE.
  2. Add project reference from one project to another.
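A project-to-project reference could look something like the following sketch in the console (project names are illustrative; this assumes VSLangProj-style projects exposing a References collection with AddProject):

# add a reference from MyApp to MyLibrary
$target = Get-Project MyApp
$dependency = Get-Project MyLibrary
$target.Object.References.AddProject($dependency)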
Jan 26, 2011 at 2:45 AM
oisin wrote:

I envisage these commands as being available at runtime in the NuGet Project Console so that users building package installers can experiment easily at runtime to get the steps correct. They would be integrated with the existing NuGet commands (Get-Project, Get-Package etc.)

This is fine when the helper library is small, but as we add more helper functions to it, I'm concerned that loading all of them up front will cause a perf hit. Maybe we should bundle it into a module, and have the init.ps1/install.ps1 load them explicitly? This is similar to loading a helper assembly in .NET.

Jan 26, 2011 at 2:47 AM

Add an msbuild targets file as an import (tricky as ordering might be important, i.e. after c# targets import)

/kzu from galaxy tab
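One hedged way to do this is through the MSBuild object model already loaded in the VS process. AddImport appends the new Import element at the end of the project file, which in a typical project places it after the C# targets import (the .targets path below is illustrative):

# add a .targets import to the current project's MSBuild project file
$project = Get-Project
$buildProject = [Microsoft.Build.Evaluation.ProjectCollection]::GlobalProjectCollection.GetLoadedProjects($project.FullName) | Select-Object -First 1
$buildProject.Xml.AddImport('$(SolutionDir)\.build\MyPackage.targets') | Out-Null
$buildProject.Save()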

Jan 26, 2011 at 2:56 AM
Edited Jan 26, 2011 at 2:59 AM
dotnetjunky wrote:
oisin wrote:

I envisage these commands as being available at runtime in the NuGet Project Console so that users building package installers can experiment easily at runtime to get the steps correct. They would be integrated with the existing NuGet commands (Get-Project, Get-Package etc.)

This is fine when the helper library is small, but as we add more helper functions to it, I'm concerned that loading all of them up front will cause a perf hit. Maybe we should bundle it into a module, and have the init.ps1/install.ps1 load them explicitly? This is similar to loading a helper assembly in .NET.

Sure, on-demand loading of a DTE module is fine. All I meant is that the commands should be accessible to users and not just internal to packages. By integrated, I meant pipeline-compatible; not necessarily in the same module.

Jan 28, 2011 at 8:32 PM

As David Fowler suggested, we could take a shortcut and base this code on what's already in the T4Scaffolding. It contains cmdlets implemented in C# (with pretty good test coverage, in most cases):

  • Get-PluralizedWord / Get-SingularizedWord
  • Get-ProjectAspNetMvcVersion
  • Get-ProjectFolder
  • Get-ProjectItem
  • Get-ProjectLanguage (i.e., C# or VB)
  • Get-ProjectType (locates a type in the specified project, or another project it references. Note: it returns a CodeType instance from the VS automation API, not a .NET System.Type instance, because your source code is not necessarily compiled yet. You can use the CodeType instance to add/remove members like methods and fields without affecting the rest of the source code). Possibly this should be renamed to Get-CodeType to avoid suggesting that it returns the type of a project.
  • Set-IsCheckedOut

The other cmdlets it contains are specific to scaffolding, so probably not relevant to this feature request.

Steve

Jan 28, 2011 at 8:54 PM
Edited Jan 28, 2011 at 10:25 PM

Thanks Steve.

At this point, I'm not planning on implementing the DTE module as binary cmdlets. Frankly, the perceived benefits of compiled code for working with the DTE are far outweighed by the flexibility of PowerShell's COM adapter. I intend to keep the commands extremely task-oriented, very composable and, above all, simple. By sticking with functions, we do not lose testability: writing a test harness for a function is identical to writing one for a binary cmdlet. The testing should focus on the externally observable behaviour (inputs, outputs, side effects) of the command, so writing the commands in C# only complicates matters unnecessarily, in my opinion. PowerShell v2 functions have parity with v2 binary cmdlets in terms of functionality, so nothing is lost.

This is not an opinion I have formed lightly, but instead is formed from in-the-field experience of writing command suites wrapping COM APIs over the last five-plus years. The only time writing binary cmdlets for COM APIs is worth the effort is when the objects returned and passed are pure POCOs. Due to the aforementioned COM adapter, typing cmdlet parameters as interop types (e.g. public EnvDTE.Project MyParameter { get; set; }) is fruitless as the adapter obscures their managed identity, excluding them from the parameterset / positional parameter disambiguation algorithms.

If I'm going to manage this module, this is the approach I would like to take. If we insist on writing in C# just to keep the rest of the team comfortable, then I'm afraid we're making the wrong decision, and IMHO a shortsighted one. Parts of the API would be managed code where appropriate (enumerating the GAC, for example), but the commands will, for the most part, be implemented in script.

-Oisin

Update: lessen the imperative tone - this is a discussion after all.

Update 2: I'm sure there's code there that could be leveraged for the framework nonetheless - thanks for the pointer.

Jan 29, 2011 at 12:39 AM

Hi Oisin

I'm sure that's fine - whatever way you think is best. I can keep the existing C# cmdlets in T4Scaffolding and use them independently as long as you don't create cmdlets with clashing names.

Just one thing to bring to your attention in advance: as far as I know, DTE operations in NuGet 1.1 are very slow, because the console now runs in async mode off the UI thread. Scanning a deep project hierarchy can take seconds. The current known resolution is to run the operation on the VS UI thread, which is straightforward to do in C#, and maybe slightly trickier in PowerShell but probably possible. See http://nuget.codeplex.com/workitem/589 for more info. And if you come up with some other clever workaround, please let me know, because I'd love to eliminate the thread switching!

Cheers

Steve

Jan 29, 2011 at 2:24 AM

Hey Steve, thanks for the note about that DTE issue.

Btw, a better solution - and one I recommend to everyone - is to module-qualify your commands to prevent clashes (actually, there are no more clashes in v2 - the last loaded module wins). This future-proofs them against not just a clash with the NuGet module, but also with a peer module that may also be in scope. It works like this:

# my module manifest: t4scaffold.psd1
ModuleToProcess = 't4scaffold.powershell.commands.dll'
...

# my init.ps1 script
Import-Module (Join-Path $PSScriptRoot t4scaffold.psd1)
$projectItem = t4scaffold\Get-ProjectItem foo

It's as simple as putting the module name (typically the stem of the psd1 manifest, or of the psm1 script module if you're loading the binary module as a nested module) in front of the command with a backslash. This prevents any disambiguation problems in the future, especially if you name your module after your package ID (which should be unique in the repo).

Feb 15, 2011 at 9:42 PM

WANT:

Some content that one of my packages installs must have its build action marked a certain way (Content) in order for it to be useful. It would be nice to automate this post-install to change that attribute for those files, or to specify the type in config so that on install they are added appropriately.
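One way to sketch this today in install.ps1 is through the project item's BuildAction property (the file name is illustrative; the value 2 corresponds to Content in the VSLangProj prjBuildAction enumeration):

# mark an installed file's build action as Content
$project = Get-Project
$item = $project.ProjectItems.Item("readme.txt")
$item.Properties.Item("BuildAction").Value = 2   # 2 = prjBuildActionContent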