
Script: Set correct permissions on Mac OS X server user home folders

One of my routine sysadmin tasks involves correcting Unix permissions and Darwin ACLs on the user home folders on our Mac OS X file server. Sometimes things happen and users alter the permissions on various items in their home folder, giving (or restricting) access to the wrong people. Or your friendly junior sysadmin copies home folders around and they end up being owned by him or her instead of the user they should belong to. Or you restore from a backup that does not restore the permissions. You get the picture.

Over the years I have added new features and made a variety of improvements to the script that I run to set (or re-set) the permissions on our user home folders. I now feel that the script is robust (and customizable) enough that I ought to share it with others who might want to use it too.

The script is actually two scripts that work in tandem. The first script is called FixUser.sh, and it sets the correct Unix permissions and Darwin ACLs on a home folder (and subfolders) for a specific user.

The script must be run as root, and it can be placed anywhere you like. I keep mine in the directory that contains the users’ home folders. Of course the script will need to be both readable and executable; if you’re going to keep it in a place where other people can see it, make it owned by root and disallow access to others (e.g. chown root:wheel FixUser.sh; chmod ug+rwx,o-wx FixUser.sh).

Usage is as follows:

./FixUser.sh username

The username parameter should be the primary shortname for a user on your system, and the home folder for that user needs to have the same name.

Two items of importance are found on lines 2 and 3 of the script. These are variables that need to be set to the full (absolute) path to the folder that contains your users’ home folders, and the folder that contains your users’ Windows profiles (if you’re also running Samba with roaming profiles). PLEASE EDIT THESE VARIABLES BEFORE RUNNING.

Here’s the script:


#! /bin/bash
HomeDir=/NetUsers
ProfileDir=/NetProfiles

cd $HomeDir
User=$1

if id -u $User 1> /dev/null; then
    if [ -d $User ]; then
        if [ -x $User/.ApplyPrePerms.sh ]; then
            echo " Running pre-permissions task…"
            cd $User
            su -f -m $User ./.ApplyPrePerms.sh
            cd $HomeDir
            echo " Done."
        fi
        echo Processing Home Directory for $User
        chflags -R nouchg $User
        chmod -R -N $User
        chown -R $User:staff $User
        chmod -R u+rwX,go-rwx $User
        chmod go+rX $User
        chmod -R go+rX $User'/Sites' $User'/Public'
        chmod -R go-r+wX $User'/Public/Drop Box'
        chmod +a "$User allow list,add_file,search,delete,add_subdirectory,delete_child,read,write,append,execute,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit" $User'/Public/Drop Box'
        if [ -x $User/.ApplyPostPerms.sh ]; then
            echo " Running post-permissions task…"
            cd $User
            su -f -m $User ./.ApplyPostPerms.sh
            cd $HomeDir
            echo " Done."
        fi
    fi
    cd $ProfileDir
    if [ -d $User ]; then
        if [ -x $User/.ApplyPrePerms.sh ]; then
            echo " Running pre-permissions task…"
            cd $User
            su -f -m $User ./.ApplyPrePerms.sh
            cd $ProfileDir
            echo " Done."
        fi
        echo Processing Profile for $User
        chflags -R nouchg $User
        chmod -R -N $User
        chown -R $User:staff $User
        chmod -R u+rwX,go-rwx $User
        if [ -x $User/.ApplyPostPerms.sh ]; then
            echo " Running post-permissions task…"
            cd $User
            su -f -m $User ./.ApplyPostPerms.sh
            cd $ProfileDir
            echo " Done."
        fi
    fi
fi

As you can see, it does a bit of sanity checking. Feel free to alter the permission setting and clearing as you see fit; this is how I like it on my Mac OS X server, and I think the resulting permissions are pretty much factory-default, or slightly improved. The ACL on the Drop Box is really a nice touch.

If you don’t use Samba or Windows roaming profiles, you can remove the cd $ProfileDir line and the if block that follows it.

A single line is printed for each user home folder and profile folder that is processed. The script tries not to display any unimportant error or warning text, but warnings or errors that deserve your attention should be printed.

One final and very BIG feature is the ability to “augment” the permissions for specific users. You will notice that the script checks for (and calls) additional, optional scripts inside the user’s home folder, called .ApplyPrePerms.sh and .ApplyPostPerms.sh. These scripts are run as the user (not root) for safety, and they allow you to modify or augment the resulting permissions on a per-user basis. (Or they can be used to allow the user to customize their own permissions to their liking.) The point is, these files can be permanently stored in whichever home folders need customization, without having to remember or figure it all out the next time you decide to reset user permissions en masse.
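Just to illustrate the idea, here is the sort of thing a per-user .ApplyPostPerms.sh might contain. This is a made-up example, not one of mine; the folder names are hypothetical, and since it runs as the user it can only touch things that user already controls:

#! /bin/bash
# Hypothetical per-user .ApplyPostPerms.sh, run as the user (not root)
# from inside that user's home folder, after the standard permissions
# have been applied by FixUser.sh.

# Example: let the staff group browse a shared "Projects" folder.
chmod -R g+rX Projects

# Example: make sure a private "Journal" folder stays private.
chmod -R go-rwx Journal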

Which brings us to our next script, FixUsers.sh (plural).

This second script is much smaller and simply enumerates all the items in the directory that contains your user home folders. Then, for each item, it runs FixUser.sh (singular), which actually sets the permissions for that specific user’s home folder.

The item of greatest importance in FixUsers.sh is line 2, where we set the variable WorkDir to the full path to the directory containing your users’ home folders. You can keep this script anywhere you like, but note that FixUser.sh needs to be in the home folders directory itself, because FixUsers.sh changes into that directory and calls ./FixUser.sh from there. FixUsers.sh will also need to be run as root in order to work. If you decide to keep it in a place where other people might access the script, please remember to set some reasonable permissions on it (e.g. chown root:wheel FixUsers.sh; chmod ug+rwx,o-wx FixUsers.sh).


#! /bin/bash
WorkDir=/NetUsers

cd $WorkDir
for User in *
do
    if [ -d $User ]; then
        ./FixUser.sh "$User"
    fi
done
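There are no arguments to pass; just invoke it as root, for example via sudo (this assumes you kept both scripts in /NetUsers, as I do):

sudo /NetUsers/FixUsers.sh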

I hope you find these useful. I sure do.

If you have any suggestions for improvement, please leave a comment!

PayPal tips are welcome at kevpatt@khptech.com :)

Posted by Kevin H. Patterson - 1688 days ago.

A Note About My Computing Background

Here’s a bit about my computing background and why I currently use a Mac as my primary development platform, in case anyone finds it interesting.

My first computer was a TI-99/4A. I taught myself BASIC on this machine in the late ’80s. (Designed in 1979, the 16-bit TI-99 was far ahead of its time, but poor management and marketing at Texas Instruments killed their computer division.)

Most of my elementary school classrooms had TI computers. Most of my secondary classrooms had pre-Mac Apple computers. In high school we had a computer lab consisting of IBM PS/2 machines.

My second computer was an IBM PS/2 with an 8086 CPU running at 8 MHz. I went much deeper into BASIC on this machine, and eventually purchased Microsoft QuickBasic in order to compile applications. I also taught myself x86 and x87 assembly language. Many of my programs really pushed the limits of this machine’s graphics and sound capabilities.

I eventually got a Pentium machine running Windows 95. I moved from BASIC to Visual Basic, and expanded my assembly language skills to include 32-bit protected mode. I learned the Windows API, ActiveX, DirectX, OpenGL, and TCP/IP.

During this period I also worked for a Fortune 500 corporation doing network administration, infrastructure maintenance, and desktop support. This environment involved a good number of Windows servers, Oracle, and NetWare, as well as an IBM AS/400 with both native and Windows-based terminals running Rumba. Most of their workstations were running Windows 95 or NT at the time. Desktops ran Microsoft Office. The entire accounting / inventory system ran on the AS/400 (OS/400). There were a number of industrial control systems running embedded software (PLCs, ladder logic, etc.).

It was around 2000 that I discovered the amazing but ill-fated BeOS. Here was a clean, lightweight, yet powerful platform based on a beautiful C++ API with POSIX support. This really opened my eyes to the layers of bloat and legacy baggage propping up Windows as an OS. I learned C++, wrote some apps, and enjoyed a small, vibrant, friendly, and amazingly open developer community. From a developer’s standpoint, it was like walking out of a cave into a bright, beautiful paradise.

The eventual demise of Be, Inc. taught me something interesting. Windows wasn’t dominant because it had a better design or better technology. It was the sheer level of entrenchment coupled with heavy-handed anti-competitive tactics that Microsoft relied on to keep Windows #1. The blinders were off, and I knew that better possibilities existed.

Many of those who started Be, Inc. had come from Apple Computer. It just so happened that Apple was making the transition to OS X around the same time that Be went out of business. Some of Be’s engineers even went back.

I continued to explore different OSes. I played around with some small projects, as well as different flavors of Linux. Gentoo was a favorite. I also discovered FreeBSD, imho the best example of the “real thing” in servers (Linux = BSD + Hype (and GPL fanaticism)).

Around the same time, I also took a job where I was made responsible for managing a lab of computers running Mac OS X. I was new to Mac OS X as a domain controller, and it was a bit of a learning curve. But as they say, “no pain, no gain.” I bit the bullet and was rewarded immensely. What I discovered was a much more refined and integrated network administration experience. I also got to experience the beauty of Mac OS X on the desktop from day to day. The UNIX underpinning was the icing on the cake.

Today I manage a campus network environment with both Windows and Mac servers, a Mac-based computer lab, and both Mac and Windows desktop support across different departments. Support issues between Windows and Mac are like night and day for the most part. There are at least two other network administrators for PC support, while I manage the Mac support by myself. This has had an influence, to the point that Mac has arguably become the computer of choice for the majority of students and many staff as well.

The Mac platform has proven itself to be worth the somewhat higher initial cost in hardware. The general lack of issues and top-to-bottom integration has won a lot of people over. The Mac platform has also proven to be advantageous when it comes to systems integration, leveraging open standards and cross-platform technologies. This has allowed us to develop and deploy a custom network access control system running on FreeBSD, completely integrated with our Mac-based user management. All of our core network services run on Mac OS X server, with the exception of SMB and Active Directory on a Windows server (solely for the Windows clients).

I highly recommend checking out the two links I posted earlier. I see a bright future ahead for the Mac platform. The Mac “ecosystem” is simply a cleaner, more fertile environment, which offers a better overall user experience in many cases. Switching to Mac is simply a matter of significant exposure and a willingness to try something different; it isn’t always easy, but it is well worth the effort.

Posted by Kevin H. Patterson - 2211 days ago.

Worldwide Earthquake Statistics

Ok, time for a little Science™.

Recently I was intrigued by some reports on the Internet showing drastically increased earthquake activity in recent years. The reports claimed their data was sourced from the USGS website. So naturally, I decided to download the data and analyze it for myself.

After all that work, I decided I should post something here for anyone else interested.

First off, full disclosure of my sources:

My data comes from the USGS, from two datasets. The first dataset is called the “Centennial Earthquake Catalog”, and it can be found here. It does not include any earthquakes less than M 5.5, and extends from 1900 to April 2002. For reasons mentioned below, I only looked at data from 1973 onward.

The second dataset is called the “USGS/NEIC (PDE)”, and a query tool can be found here. This has a much larger set of entries for the time period we are considering. The NEIC dataset begins at 1973, which is why I only use the Centennial dataset from 1973 onward.

A word about the data

The Centennial dataset is by far the smaller of the two datasets for the time period I am considering. During any particular time period, it appears to have only a small fraction of the entries that the NEIC dataset would have during the same period, even for M 6.0 and greater. It also terminates in April 2002. I believe this dataset represents earthquake statistics gathered by the USGS from published sources; if you look at the data, you can see that it often correlates many different reports of the same quake.

The NEIC dataset is much denser, and as far as I can tell it is made up of data that is constantly being fed from seismographs worldwide. Thus, I would expect this dataset to have a definite increase in density over time, as earthquake reporting improves and more seismographs are installed. Someone please correct me if I’m wrong.

I downloaded the entire Centennial dataset, and queried the NEIC dataset for the years 1973–2010, M 3.0 or greater. All data was converted to a common CSV format and imported into a MySQL database. There are 8,333 entries from the Centennial dataset, and 448,624 entries from the NEIC dataset.
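For anyone curious, the import step looked roughly like the sketch below. This is just the general shape, not my exact commands; the database, table, and column names are placeholders, and the column list will depend on how you export the CSV from the USGS query tools.

#! /bin/bash
# Rough sketch of loading a quake CSV into MySQL.
# "quakes", "neic", the column names, and neic.csv are all placeholders.
mysql --local-infile=1 -u root -p quakes <<'SQL'
CREATE TABLE IF NOT EXISTS neic (
    qdate     DATE,
    latitude  DOUBLE,
    longitude DOUBLE,
    magnitude DOUBLE
);
LOAD DATA LOCAL INFILE 'neic.csv'
INTO TABLE neic
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(qdate, latitude, longitude, magnitude);
SQL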

Frequency Graphs

I produced a number of different graphs to help me analyze the data. First, I wanted to see the total number of earthquakes per year.

USGS NEIC Earthquakes M ≥ 3 Per Year

This graph shows the number of earthquakes per year, M 3.0 or greater, categorized by magnitude range, from the NEIC dataset.

Here is the data from the Centennial dataset. Notice that the data only shows quakes M 5.5 or greater:

USGS Centennial Earthquakes M ≥ 5.5 Per Year

I made another graph of the NEIC data showing only quakes M 5.0 or greater for comparison:

USGS NEIC Earthquakes M ≥ 5 Per Year

Force Graphs

After looking at this data for a while, I started to think “lots of small earthquakes may not be as significant as a few large ones”. This is because the Richter scale is logarithmic. An increase of 1 unit on the Richter scale equals 10x the ground motion (displacement), and an increase of 1 unit equals about 32x the destructive force (energy); so, for example, an M 7.0 quake releases roughly 32 times the energy of an M 6.0. See here.

So I made some new graphs showing total magnitude, total displacement, and total energy of all earthquakes in a given year.

Now, the “total destructive force” for all earthquakes in a single year is a very large, absolutely meaningless number. As is total ground motion. As is “total” magnitude on the Richter scale. It would be hard to relate these numbers to anything, let alone put them on the same graph. So here’s what I did:

I found the maximum for all 3 sets, and then graphed each set as a percentage of the maximum. This is why I am calling them “normalized” values on the graphs below. In this way, you can easily see the overall trends as well as the relationship between these 3 values. Just keep in mind that there is no “absolute” relationship between the lines, so if one goes above or below another that doesn’t mean anything. What is important is how they track each other following the trends.

Ok, enough of that. If you want the formulas for how I calculated the numbers, they are below the first graph.

First, the NEIC data:

USGS NEIC Earthquakes M ≥ 3 Normalized Total Forces

The blue line shows total magnitude (sum of magnitudes) for all quakes during a given year. (Probably a meaningless metric)

The green line shows total displacement (movement of the earth) for all quakes during a given year. This is calculated as ∑(10^M) for each year.

The yellow line shows total energy (destructive power) for all quakes during a given year. This is calculated as ∑((10^1.5)^M), i.e. ∑(10^(1.5·M)), for each year. This is probably the most important metric.
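If you want to reproduce the totals yourself, here is a rough sketch of the calculation. This is not my actual pipeline; it assumes a two-column “year,magnitude” CSV, which is not the exact format of either dataset, so adjust the field numbers to match your export.

#! /bin/bash
# Sketch: per-year totals of magnitude, displacement (10^M), and
# energy (10^(1.5*M)) from a hypothetical "year,magnitude" CSV,
# each printed as a percentage of its largest yearly value.
awk -F, '
{
    mag[$1]    += $2
    disp[$1]   += 10 ^ $2
    energy[$1] += 10 ^ (1.5 * $2)
}
END {
    for (y in energy) {
        if (mag[y]    > maxm) maxm = mag[y]
        if (disp[y]   > maxd) maxd = disp[y]
        if (energy[y] > maxe) maxe = energy[y]
    }
    for (y in energy)
        printf "%s %.1f%% %.1f%% %.1f%%\n", y,
            100 * mag[y] / maxm, 100 * disp[y] / maxd, 100 * energy[y] / maxe
}' quakes.csv | sort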

Now the Centennial data:

USGS Centennial Earthquakes M ≥ 5.5 Normalized Total Forces

Finally, the NEIC data restricted to M 5.0 and above, for better comparison with the Centennial data:

USGS NEIC Earthquakes M ≥ 5 Normalized Total Forces


As you can see, there is a lot of variation in the graphs, but there is also an overall constancy. I will refrain from any special interpretation here.

You can definitely see a slight upward linear trend in the NEIC data, especially when we include quakes below M 5.0. This is probably due to the constant improvement in detection and reporting worldwide, which mostly results in a greater number of recorded small-magnitude quakes. This conclusion is also supported by the USGS’ own explanation.

It is much harder to see a trend in the Centennial dataset. It is a much more sparse dataset and covers a narrow range of magnitudes over a shorter time. Overall the linear trend here seems almost flat.

When we look at the force graphs, here again we can see a slight upward linear trend, especially in the NEIC dataset. The interesting thing to note here is that this trend is still visible even when we restrict ourselves to M 5.0 and higher. Why?

Probably somewhat due to more and better reporting, but if you compare the frequency and force graphs, you will notice that there have been a few more large earthquakes in the last 10 years or so, and because of the math, it makes a big difference.

The most significant real-world impact is probably represented by the yellow “energy” line, representing total destructive force. It peaked in 2004, almost entirely as a result of the Asian Tsunami quake of that year.

2008 had an unusually high number of small-magnitude earthquakes. This gives you peak magnitude and displacement totals in 2008, but yields only an average total destructive energy. 2008 is still the most unusual year in the data, IMHO.

The Centennial graphs show peaks in 1995 and 2000-2001. These look really big but keep in mind they are just normalized values. They correspond to the moderate bumps in 1995 and 2000-2001 on the NEIC graphs.

Also notice that the last data point on all the graphs is lower than it should be; the totals for 2002 (Centennial) and 2009 (NEIC) are incomplete. The Centennial data ended on April 1, so you could roughly estimate the correct value by multiplying by 4. I believe the NEIC data is current through December 1, so it’s about 92% of the way there.

If anyone wants a copy of my dataset, shoot me an email.

Posted by Kevin H. Patterson - 2516 days ago.