
OneDrive Folder Redirection for macOS

So you’re running a macOS shop and using Office 365 for identity, email, and data storage. You want to decrease the risk of data loss, manage the security of corporate data, and give staff the ability to share files in a controlled manner.

You also want to make it super easy for staff to adopt.

Microsoft OneDrive has a super great feature (for Windows) called Known Folder Move. This redirects the Desktop, Documents, and Pictures folders to your OneDrive folder.

Brian McFarlane has graciously provided the community with a package which configures KFM for macOS workstations. The package depends on ‘Outset’, a tool that “automatically processes packages, profiles, and scripts during the boot sequence, user logins, or on demand.”

Configuration is simple, with a sample .mobileconfig profile for the KFM script, and one to set a Privacy Preferences Policy Control (PPPC) payload for Python. These can be deployed manually, or with an MDM such as Jamf Pro.

The KFM script is triggered on user logon to perform the following:

  1. Check that OneDrive is running and the OneDrive folder exists (i.e. the user is signed in to OneDrive)
  2. Check whether the ‘Desktop’ and/or ‘Documents’ folders in the user’s home are already symlinks, and if so, end the script
  3. If they aren’t, move ‘Desktop’ and ‘Documents’ into the OneDrive folder and create symlinks in their place (see the sketch below). Any conflicting files are placed in the user’s ‘Desktop’ folder and the user is alerted.
  4. If configured, clean up filenames in ‘Desktop’ (illegal characters, leading/trailing spaces, etc.)
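
For flavour, the core of step 3 boils down to something like the following (a simplified sketch, not the actual script — the real one handles conflicts, alerts, and logging, and the OneDrive folder name here is hypothetical):

#!/bin/sh
# Simplified sketch of the move-and-symlink step; not the real KFM script
ONEDRIVE="$HOME/OneDrive - Contoso"   # hypothetical tenant folder name

for folder in Desktop Documents; do
    # Already a symlink? The folder has been redirected; skip it
    [ -L "$HOME/$folder" ] && continue
    # Move the folder into OneDrive, then symlink it back into place
    mv "$HOME/$folder" "$ONEDRIVE/$folder"
    ln -s "$ONEDRIVE/$folder" "$HOME/$folder"
done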

This is great, and works amazingly well. I encountered no scenarios where data was lost. YMMV though, so don’t blame me if it sets fire to your cat.

I did have one problem with this, though. After the ‘Desktop’ and ‘Documents’ folders were moved and linked, the Finder sidebar shortcuts stopped working. I found the aptly named ‘mysides’, a small CLI tool for modifying Finder sidebar entries.
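
For illustration, fixing the entries by hand looks something like this (assuming the folders have already been redirected, so the symlinks exist):

# Remove the stale sidebar entries, then add ones pointing at the new symlinks
mysides remove Desktop
mysides remove Documents
mysides add Desktop "file://$HOME/Desktop/"
mysides add Documents "file://$HOME/Documents/"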

With mysides installed, the KFM script can be told to fix this for us. You can find the changes here: https://github.com/nmanzi/MacOS-OneDrive-KFM/blob/42057dc46680f8dfcfbe8d641e63dd2139088aef/payload/usr/local/outset/login-privileged-every/onedrive-kfm.sh#L134

I have a fork and release of the updated KFM, which requires both Outset and mysides to be installed as prerequisites.

Download MacOS-OneDrive-KFM v0.2

Hope this helps!

Jamf Hacks #1 – AD Group scoped policies on unbound assets

In a rush? Find the script here. Create a new standard user in Jamf for the script to access the API with, granting it Auditor privileges. Update the script with that username and password, then drop it into a new policy scoped to all workstations.

Scenario:

  • Jamf has a connection to AD DS for authentication
  • Users enrol their own devices using their AD username and password
  • Devices are not bound to AD, nor do local usernames match AD usernames
  • Some policies are scoped to AD groups
  • AD-group-scoped policies are not being applied to users’ workstations

Reason:

Simple, really. When the Jamf agent evaluates the policies, it does so using the local username. If this doesn’t match an AD username, then AD group membership isn’t found, and scoped policies don’t apply.

The AD username of the person who owns (enrolled) the device is stored in the computer’s user and location properties in Jamf Server.

Example computer object in Jamf, showing User and Location properties.

We just need a way to tell the Jamf agent to apply policies for that username.

Solution:

Use the API! I was hunting around for an example and found this on Jamf Nation. The script offered by ‘ShaunRMiller83’ had almost exactly what I was looking for: a bash-scripted API call to pull the device’s user from Jamf Server.

The important part of Shaun’s script is below.

#!/bin/sh
# Variables
jssURL="https://jamf.domain.com:8443/"
apiUser="apiuser"
apiPass="apipassword"

# Grab this machine's serial number from IOKit
SERIAL=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')

# Look up the computer's location record in Jamf by serial number
USERINFO=$(curl -k "${jssURL}JSSResource/computers/serialnumber/${SERIAL}/subset/location" -H "Accept: application/xml" --user "${apiUser}:${apiPass}")
USERNAME=$(echo "$USERINFO" | /usr/bin/awk -F'<username>|</username>' '{print $2}' | tr '[A-Z]' '[a-z]') # The asset owner's username, lowercased

All we need to do is call jamf policy -username with the $USERNAME variable to have the Jamf agent pull down user- or group-scoped policies. The following script does this.

#!/bin/sh
# Polls the Jamf API for the computer's owner, then requests
# all policies for that username

# Variables
jssURL="https://<YOURDOMAIN>.jamfcloud.com/"
apiUser="<YOURAPIUSER>"
apiPass="<APIUSERPASS>"

# Serial number -> owner lookup, as per Shaun's script above
SERIAL=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')
USERINFO=$(curl -s -k "${jssURL}JSSResource/computers/serialnumber/${SERIAL}/subset/location" -H "Accept: application/xml" --user "${apiUser}:${apiPass}")
USERNAME=$(echo "$USERINFO" | /usr/bin/awk -F'<username>|</username>' '{print $2}' | tr '[A-Z]' '[a-z]')

# Ask the Jamf agent to evaluate policies as the asset's owner
printf "%s %s\n" "Processing policy for user:" "$USERNAME"
/usr/local/jamf/bin/jamf policy -username "$USERNAME"

The script needs an API username and password to authenticate with the Jamf API. Simply create a new standard user in Jamf, grant it Auditor privileges, and use those credentials in the script.

Uh-oh! If we just upload this script to Jamf and have it run in a policy, jamf will complain that it’s already busy processing policy. So we’ll need to drop the script somewhere on our assets and configure a scheduled task to trigger it every so often.

Hidden commands in the jamf binary help us here, specifically the jamf scheduledTask verb. This will create a LaunchDaemon with a specified command, user, and schedule.

jamf scheduledTask -command "/path/to/our/script/getuserpolicy.sh" -name GetADUserPolicies -user root -runAtLoad -minute '*/30'

We can wrap this all up in a single script that will:

  • Drop the ‘getuserpolicy.sh’ script on the system in /usr/local/jamf/bin
  • Make the script executable with a chmod +x
  • Run the jamf scheduledTask command to schedule script execution for every thirty minutes
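
In outline, that wrapper looks something like this (a sketch only — the final version linked below is the canonical one):

#!/bin/sh
# Installs getuserpolicy.sh and schedules it via the jamf binary
INSTALL_PATH="/usr/local/jamf/bin/getuserpolicy.sh"

# Write the policy-refresh script to disk
cat > "$INSTALL_PATH" << 'EOF'
(the getuserpolicy.sh script from above goes here)
EOF

# Make the script executable
chmod +x "$INSTALL_PATH"

# Create a LaunchDaemon: run at load, then every thirty minutes
jamf scheduledTask -command "$INSTALL_PATH" -name GetADUserPolicies -user root -runAtLoad -minute '*/30'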

Find the final version of this script below. Did this script help you? I’d love to hear about it! Comment below, or reach out to me on twitter.

Install a PowerShell .nupkg on an offline computer

The ability to find and install PowerShell modules from online sources like NuGet makes life for a Windows admin a smidge nicer. On the flip side, arbitrary trust of online package repositories and granting servers outbound internet access can be a nightmare for those tasked with protecting a network.

You might find yourself needing to install a PowerShell module (as a .nupkg file) on a system with restricted (or no) internet access, as one of our security consultants found himself having to do.

Here’s a quick guide on how to achieve this. If only it were as simple as an Install-Package .\module.nupkg!

Offline .nupkg installation

  1. Run Install-PackageProvider -Name NuGet -RequiredVersion 2.8.5.201 -Force to install the provider on a computer with an internet connection.
  2. After the install, you’ll find the provider in C:\Program Files\PackageManagement\ProviderAssemblies – copy the NuGet folder to external media or otherwise find a way to get it onto your target system.
  3. Place the NuGet folder in C:\Program Files\PackageManagement\ProviderAssemblies on your target computer.
  4. Start a new PowerShell session on the target computer to auto-load the package provider.
  5. Create a new folder in C:\ named Packages.
  6. Copy your .nupkg file(s) into C:\Packages.
  7. In PowerShell, run Register-PSRepository -Name Local -SourceLocation C:\Packages -InstallationPolicy Trusted
  8. List the packages available with Find-Module -Repository Local
  9. Run Install-Module -Name <YourModuleName>, where <YourModuleName> is the name of your package as returned by the command in step 8.
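
Condensed, the target-side steps look like this in one block (the paths and module name are just examples; run from an elevated PowerShell session after copying the provider folder into place):

# On the offline target, once the NuGet provider folder is in
# C:\Program Files\PackageManagement\ProviderAssemblies:

# Create the local package source and drop your .nupkg files into it
New-Item -ItemType Directory -Path C:\Packages -Force | Out-Null
Copy-Item D:\*.nupkg -Destination C:\Packages   # adjust the source path to suit

# Register the folder as a trusted repository
Register-PSRepository -Name Local -SourceLocation C:\Packages -InstallationPolicy Trusted

# Confirm the module is visible, then install it
Find-Module -Repository Local
Install-Module -Name YourModuleName -Repository Local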

I put this together with information from trebleCode and Nova Sys Eng in this StackOverflow thread. Thanks go out to those fine people.

Finding External Users in Horizon View

Hi internet! It’s been a while!

Thought it might be worthwhile sharing a short bit of SQL we used recently in an MFA deployment project.

The objective was to obtain a list of users that had been logging into a Horizon View 6 VDI deployment so that they could be targeted for MFA provisioning.

It turns out this is quite a simple matter if the Events DB functionality is enabled. All that needs to be done is a SELECT DISTINCT over ‘BROKER_USERLOGGEDIN’ entries where the ‘ClientIPAddress’ value matches something other than your internal IP ranges.

You can find the SQL to do this below. It’s been tested with Horizon 6 and 7. Enjoy!
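
As a rough sketch of the query’s shape (table and column names assume the default events DB schema with no table-name prefix, and the excluded ranges are examples – the linked SQL is the tested version):

-- Distinct users whose VDI logins came from outside the internal ranges
SELECT DISTINCT e.UserDisplayName
FROM event e
JOIN event_data d ON d.EventID = e.EventID
WHERE e.EventType = 'BROKER_USERLOGGEDIN'
  AND d.Name = 'ClientIPAddress'
  AND d.StrValue NOT LIKE '10.%'
  AND d.StrValue NOT LIKE '192.168.%';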

How AWS does networking

Came across this video after reading a Reddit thread asking who really uses SDN. Couldn’t pass up the opportunity to share this excellent talk.

From the description:

In this session, we walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we’ve implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.

Graylog Extractors for pfSense 2.2 filter logs

Hi all,

I’m trying out Graylog for log collection, aggregation, and analysis. It’s free, pretty damn easy to deploy, and available in OVA format.

The first thing I noticed was that there seemed to be no extractors for pfSense 2.2’s new log format. Extractors let you parse a syslog message and place certain values into ‘fields’ for analysis or use in graphs.

Here’s a set I prepared relatively quickly. You can import them by following these steps:

  1. Click System -> Inputs in the Graylog UI
  2. Click ‘Manage extractors’ next to the relevant input
  3. Click ‘Import extractors’ in the ‘Actions’ menu at the top right of the page
  4. Paste the below script into the window and then click ‘Add extractors to input’

The extractors will parse the following fields out of the pfSense 2.2 filterlog messages:

  • Rule number into pfsense_filter_rulenum
  • Direction into pfsense_filter_direction
  • Ingress interface into pfsense_filter_ingress
  • Action into pfsense_filter_action
  • Protocol into pfsense_filter_proto
  • Source IP into pfsense_filter_sourceip
  • Source Port into pfsense_filter_sourceport
  • Destination IP into pfsense_filter_destip
  • Destination Port into pfsense_filter_destport

Right now they only interpret IPv4 logs; IPv6 entries don’t get parsed (thanks to the condition regex), as they’re formatted differently.
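
For reference, an IPv4 filterlog entry is a CSV payload shaped roughly like this (the values here are invented; the ninth field is the IP version, and IPv6 entries change layout after it, which is what the condition regex keys on):

filterlog: 5,,,1000000103,em0,match,block,in,4,0x0,,64,54321,0,none,6,tcp,60,203.0.113.5,192.0.2.10,49152,443,0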

The script is available here.

Hope this helps!


NetApp & PowerShell – Snapshot Report

This post follows on from my last, where I created a script to send an email report when running dedupe operations were detected.

Using the same script as a base, I made some quick modifications to have it send an email report of volume snapshots with their sizes and creation dates. Here’s what it looks like.

[Screenshot: NetApp Volume Snapshot Report email]

The syntax for the report is pretty much the same as the last script.

.\NetApp-SnapshotReport.ps1 -Controller controller1,controller2 -Username <user> -Password <pass> -SMTPServer <server> -MailFrom <Email From> -MailTo <Email To>
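
Under the hood, the gathering step amounts to something like this (a sketch using the DataONTAP module; cmdlet and property names are from memory, so adjust for your module version):

# Sketch of the data-gathering step; the full script adds the HTML email
Import-Module DataONTAP
$cred = Get-Credential
Connect-NaController -Name controller1 -Credential $cred | Out-Null

# List every volume's snapshots with creation time and size
Get-NaVol | ForEach-Object { Get-NaSnapshot -TargetName $_.Name } |
    Sort-Object TargetName, AccessTimeDT |
    Format-Table TargetName, Name, AccessTimeDT, Total -AutoSize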

Download the script here. I currently have it configured as a scheduled task running every morning so we have a daily report of current volume snapshots, and it works well.

Enjoy!


NetApp & PowerShell – Report on running dedupe tasks

Hi all,

Recently I ran across a misbehaving NetApp whose deduplication process would be triggered on a Saturday morning and still be running come Monday. It wouldn’t happen on every scheduled run, but when it did, it hurt storage performance significantly. We’re working through the usual remediation tasks (there’s a lot of misaligned data on the volume), but in the meantime I used the NetApp Data ONTAP PowerShell module to create a little script that will shoot off an email if it detects a running SIS process.

Configure it as a scheduled task on a system that has the DataONTAP PowerShell module installed. Here’s an example of the command-line parameters:

.\NetApp-ActiveDedupeAlert.ps1 -Controller controller1,controller2 -Username <user> -Password <pass> -SMTPServer <server> -MailFrom <Email From> -MailTo <Email To>
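
The detection logic boils down to something like this (a sketch only; cmdlet and property names are from the DataONTAP module as I remember them, and the addresses are placeholders):

# Sketch of the detection step; the full script adds parameters and formatting
Import-Module DataONTAP
Connect-NaController -Name controller1 -Credential (Get-Credential) | Out-Null

# Get-NaSis reports dedupe (SIS) state per volume; anything not idle is running
$running = Get-NaSis | Where-Object { $_.Status -ne 'idle' }
if ($running) {
    $body = $running | Format-Table Path, State, Status, Progress -AutoSize | Out-String
    Send-MailMessage -SmtpServer 'smtp.example.com' -From 'netapp@example.com' `
        -To 'admin@example.com' -Subject 'NetApp: dedupe operation running' -Body $body
}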

If the script detects any running SIS processes, it’ll shoot off an email that looks like this:

[Screenshot: Dedup Alert Email]

You can grab the code here.
