
Install a PowerShell .nupkg on an offline computer

The ability to find and install PowerShell modules from online sources like NuGet makes life for a Windows admin a smidge nicer. On the flip side, arbitrary trust of online package repositories and granting servers outbound internet access can be a nightmare for those tasked with protecting a network.

You might find yourself needing to install a PowerShell module (as a .nupkg file) on a system with restricted (or no) internet access, as one of our security consultants recently had to do.

Here’s a quick guide on how to achieve this. If only it were as simple as an Install-Package .\module.nupkg!

Offline .nupkg installation

  1. Run Install-PackageProvider -Name NuGet -RequiredVersion 2.8.5.201 -Force to install the provider from a computer with an internet connection.
  2. After the install, you can find the provider in C:\Program Files\PackageManagement\ProviderAssemblies – copy the nuget folder to external media, or otherwise find a way to get it to your target system.
  3. Place the nuget folder in C:\Program Files\PackageManagement\ProviderAssemblies on your target computer.
  4. Start a new PowerShell session on the target computer to auto-load the package provider.
  5. Create a new folder in C:\ named Packages
  6. Copy your nupkg file(s) into C:\Packages
  7. In PowerShell run Register-PSRepository -Name Local -SourceLocation C:\Packages -InstallationPolicy Trusted
  8. You can list the packages available with Find-Module -Repository Local
  9. Run Install-Module -Name <YourModuleName> where <YourModuleName> is the name of your package as returned by the command in step 8.
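
Strung together, the whole thing boils down to a handful of commands. In the sketch below, E:\ stands in for whatever removable media you use, and <YourModuleName> is a placeholder just like in step 9:

# On the internet-connected computer (steps 1 and 2)
Install-PackageProvider -Name NuGet -RequiredVersion 2.8.5.201 -Force
Copy-Item 'C:\Program Files\PackageManagement\ProviderAssemblies\nuget' -Destination E:\ -Recurse

# On the offline target (steps 3 to 9). Start a new PowerShell session after copying the provider folder so it auto-loads.
Copy-Item E:\nuget -Destination 'C:\Program Files\PackageManagement\ProviderAssemblies\' -Recurse
New-Item -Path C:\Packages -ItemType Directory
Copy-Item E:\*.nupkg -Destination C:\Packages
Register-PSRepository -Name Local -SourceLocation C:\Packages -InstallationPolicy Trusted
Find-Module -Repository Local
Install-Module -Name <YourModuleName> -Repository Local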

I put this together with information from trebleCode and Nova Sys Eng in this Stack Overflow thread. Thanks go out to those fine people.

Finding External Users in Horizon View

Hi internet! It’s been a while!

Thought it might be worthwhile sharing a short bit of SQL we used recently in an MFA deployment project.

The objective was to obtain a list of users that had been logging into a Horizon View 6 VDI deployment so that they could be targeted for MFA provisioning.

It appears this is quite a simple matter if the Events DB functionality is enabled. All that needs to be done is a SELECT DISTINCT for any ‘BROKER_USERLOGGEDIN’ entries where the ‘ClientIPAddress’ value matches something other than your internal IP ranges.

You can find the SQL to do this below. It’s been tested with Horizon 6 and 7. Enjoy!
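
If you just want the shape of the query, here’s a rough sketch wrapped in PowerShell via Invoke-Sqlcmd. Note this is not the exact SQL from the post: the table and column names assume a default events DB schema with no table prefix, the database is assumed to live on SQL Server, and the server name, database name, and internal IP patterns are placeholders to adjust for your environment.

# Sketch only. Adjust the table prefix, server/database details, and internal IP ranges to suit.
$query = "
SELECT DISTINCT u.StrValue AS UserName, ip.StrValue AS ClientIPAddress
FROM   [event] e
JOIN   event_data ip ON ip.EventID = e.EventID AND ip.Name = 'ClientIPAddress'
JOIN   event_data u  ON u.EventID  = e.EventID AND u.Name  = 'UserDisplayName'
WHERE  e.EventType = 'BROKER_USERLOGGEDIN'
  AND  ip.StrValue NOT LIKE '10.%'
  AND  ip.StrValue NOT LIKE '192.168.%'
"

# Requires the SqlServer (or legacy SQLPS) module for Invoke-Sqlcmd
Invoke-Sqlcmd -ServerInstance sql01.lab.int -Database HorizonEvents -Query $query |
    Export-Csv .\external-vdi-users.csv -NoTypeInformation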

How AWS does networking

Came across this video after reading a Reddit thread asking who really uses SDN. Couldn’t pass up the opportunity to share this excellent talk.

From the description:

In this session, we walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we’ve implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.

Graylog Extractors for pfSense 2.2 filter logs

Hi all,

I’m trying out Graylog for log collection, aggregation, and analysis. It’s free, pretty damn easy to deploy, and available in OVA format.

The first thing I noticed is that there seemed to be no extractors for pfSense 2.2’s new log format. Extractors allow you to parse a syslog message and place certain values into ‘fields’ for analysis or use in graphs.

Here’s one I prepared relatively quickly. You can import it by:

  1. Click System -> Inputs in the Graylog UI
  2. Click ‘Manage extractors’ next to the relevant input
  3. Click ‘Import extractors’ in the ‘Actions’ menu at the top right of the page
  4. Paste the below script into the window and then click ‘Add extractors to input’

The extractors will parse the following fields out of the pfSense 2.2 filterlog messages:

  • Rule number into pfsense_filter_rulenum
  • Direction into pfsense_filter_direction
  • Ingress interface into pfsense_filter_ingress
  • Action into pfsense_filter_action
  • Protocol into pfsense_filter_proto
  • Source IP into pfsense_filter_sourceip
  • Source Port into pfsense_filter_sourceport
  • Destination IP into pfsense_filter_destip
  • Destination Port into pfsense_filter_destport

Right now they only interpret IPv4 logs; IPv6 entries are formatted differently and don’t get parsed (thanks to the condition regex).
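
For the curious, here’s a rough PowerShell illustration of what the extractors pull out of a filterlog message. The sample line and field offsets are assumptions based on the IPv4 TCP/UDP layout of the filterlog CSV format (the real extractors use regexes rather than a simple split), so check them against your own logs:

# Hypothetical IPv4/TCP filterlog entry; field positions may differ on your system
$sample = '5,,,1000000103,em0,match,block,in,4,0x0,,64,12345,0,DF,6,tcp,60,203.0.113.5,192.0.2.10,51515,443'
$f = $sample -split ','

# Map the CSV fields onto the extractor field names listed above
[pscustomobject]@{
    pfsense_filter_rulenum    = $f[0]
    pfsense_filter_ingress    = $f[4]
    pfsense_filter_action     = $f[6]
    pfsense_filter_direction  = $f[7]
    pfsense_filter_proto      = $f[16]
    pfsense_filter_sourceip   = $f[18]
    pfsense_filter_destip     = $f[19]
    pfsense_filter_sourceport = $f[20]
    pfsense_filter_destport   = $f[21]
}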

The script is available here.

Hope this helps!


NetApp & Powershell – Snapshot Report

This post follows on from my last, where I created a script to send an email report when running dedupe operations were detected.

Starting from the same script, I made some quick modifications to have it send an email report of volume snapshots along with their sizes and creation dates. Here’s what it looks like:

[Screenshot: the NetApp Volume Snapshot Report email]

The syntax for the report is pretty much the same as for the last script:

.\NetApp-SnapshotReport.ps1 -Controller controller1,controller2 -Username <user> -Password <pass> -SMTPServer <server> -MailFrom <Email From> -MailTo <Email To>

Download the script here. I currently have it configured as a scheduled task running every morning so we have a daily report of current volume snapshots, and it works well.
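
If you’d like to see the gist before downloading anything, the core of the script boils down to something like the sketch below. It’s a simplification rather than the script itself; the snapshot property names (Name, Created, Total) are assumptions that may differ between Data ONTAP PowerShell Toolkit versions, and the SMTP details are placeholders.

# Simplified sketch of the snapshot report logic (not the full script)
Import-Module DataONTAP
$cred = Get-Credential

$report = foreach ($controller in 'controller1', 'controller2') {
    Connect-NaController -Name $controller -Credential $cred | Out-Null
    foreach ($vol in Get-NaVol) {
        # Property names here are assumptions; check Get-NaSnapshot output on your toolkit version
        Get-NaSnapshot -TargetName $vol.Name |
            Select-Object @{n='Controller';e={$controller}}, @{n='Volume';e={$vol.Name}}, Name, Created, Total
    }
}

$body = $report | ConvertTo-Html -Title 'NetApp Volume Snapshot Report' | Out-String
Send-MailMessage -SmtpServer smtp.lab.int -From 'reports@lab.int' -To 'admins@lab.int' `
    -Subject 'NetApp Volume Snapshot Report' -Body $body -BodyAsHtml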

Enjoy!

 

NetApp & Powershell – Report on running dedupe tasks

Hi all,

Recently ran across a misbehaving NetApp whose deduplication process would be triggered on a Saturday morning and still be running come the Monday. It wouldn’t happen on every scheduled run, but when it did, it hurt storage performance significantly. We’re working on the usual fixes, as there’s a lot of misaligned data on the volume. But in the meantime, I used the NetApp Data ONTAP PowerShell module to create a little script that will shoot off an email if it detects a running SIS (deduplication) process.

Configure it as a scheduled task on a system that has the DataONTAP PowerShell module installed. Here’s an example of the command-line parameters:

.\NetApp-ActiveDedupeAlert.ps1 -Controller controller1,controller2 -Username <user> -Password <pass> -SMTPServer <server> -MailFrom <Email From> -MailTo <Email To>

If the script detects any running SIS processes, it’ll shoot off an email that looks like this:

[Screenshot: the dedupe alert email]

You can grab the code here.
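
For the curious, the detection itself is only a few lines. The sketch below assumes Get-NaSis from the DataONTAP module reports a Status of 'active' while deduplication is running (verify the property names and values on your toolkit version), and the SMTP details are placeholders:

# Sketch of the running-dedupe check (not the full script)
Import-Module DataONTAP
Connect-NaController -Name controller1 -Credential (Get-Credential) | Out-Null

# Assumption: Get-NaSis output exposes Path, State, Status and Progress properties
$running = Get-NaSis | Where-Object { $_.Status -eq 'active' }

if ($running) {
    $body = $running | Select-Object Path, State, Status, Progress |
        ConvertTo-Html -Title 'Running dedupe operations' | Out-String
    Send-MailMessage -SmtpServer smtp.lab.int -From 'alerts@lab.int' -To 'admins@lab.int' `
        -Subject 'NetApp dedupe still running' -Body $body -BodyAsHtml
}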


SCVMM Error 2912 when creating a new VM

Hey all!

Had this error occur in my Hyper-V lab after playing with WinRM GPOs. After removing the GPO (because SCVMM is a finicky pain), I was still receiving this error when attempting to create a new VM:

Error (2912)
An internal error has occurred trying to contact the lab-hyperv1.lab.int server: : .
WinRM: URL: [http://lab-hyperv1.lab.int:5985], Verb: [INVOKE], Method: [CreateDirectory], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/FileInformation]
Invalid Signature (0x80090006)
Recommended Action
Check that WS-Management service is installed and running on server lab-hyperv1.lab.int. For more information use the command "winrm helpmsg hresult". If lab-hyperv1.lab.int is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

Here’s how to fix this:

  1. Remove the affected Hyper-V hosts from SCVMM
  2. Open Certificate Management (Computer) and make sure you don’t see certificates for the Hyper-V hosts in the ‘Trusted People’ store
  3. Re-add the Hyper-V hosts

You should now be able to create new VMs.
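
If you prefer to do the check in step 2 from PowerShell rather than the Certificates MMC, the Trusted People store is exposed through the Cert: drive (the lab-hyperv name below is just the example host naming from my lab):

# List any certificates for the Hyper-V hosts left in the computer's Trusted People store
Get-ChildItem Cert:\LocalMachine\TrustedPeople |
    Where-Object { $_.Subject -match 'lab-hyperv' } |
    Select-Object Subject, Thumbprint, NotAfter

# If any are still there after removing the hosts from SCVMM, delete them before re-adding the hosts
Get-ChildItem Cert:\LocalMachine\TrustedPeople |
    Where-Object { $_.Subject -match 'lab-hyperv' } |
    Remove-Item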

Tintri 3.1.1.4 and SCVMM integration – Access Denied

With the release of Tintri OS 3.1.1.4, Tintri introduced support for SCVMM and Hyper-V integration.

After upgrading to the new release, you’ll be able to add Hyper-V hosts in the Tintri UI settings screen, allowing the Tintri to grab details of running VMs. An SMI-S interface is also available after the upgrade, which you can use with SCVMM to create and manage SMB3 file shares, including setting quotas and applying a storage classification.

As with all new feature releases in the history of everything, there are some problems. You might find – after configuring your SCVMM install and Hyper-V hosts according to the documentation (available on the support site) – that your hosts don’t have access to the share. You’ll be unable to create new VMs on the new share, or even remove the share through SCVMM.

Here’s how to avoid these problems and get those file shares working.

  1. In the Tintri UI, open ‘Settings’, and open the ‘Management Access’ panel
  2. Remove the entry for your SMI-S Run-As Account group (the one you created while following the documentation)
  3. Save the settings, and then re-open the ‘Settings’ window
  4. Go back to the ‘Management Access’ panel, and re-add the SMI-S Run-As Account group with Super admin access
  5. Save settings
  6. On a host that has access to your Tintri’s data IP, while logged in as a domain user that has Super admin privileges on the Tintri, open a command prompt or PowerShell window
  7. Enter the following command, substituting for your share path and Hyper-V host group name:
    [Screenshot: the command being run in a Remote Desktop session]
  8. Go ahead and try to create a new VM; it should work

Tintri are aware of this and there’s apparently an internal bug ticket for it – hopefully it’s resolved in the next code release.

Hope this helps.