My own personal soapbox!



Like it says.. I ramble and you listen :-)



Updated: 2018-03-02T09:25:08.768-08:00

 



Moving on from Mozilla..

2011-02-21T11:07:18.932-08:00

After about five and a half years at MoCo, I have decided to move on to a new position at a startup in San Francisco. My last day at MoCo will be Feb 28th. It was a hard decision, especially looking back at everything I had a chance to work on and help build over the years, but I feel it's the right one. Working at MoCo has been one of the most challenging and satisfying jobs I have ever had, and a huge thanks to all of the folks at Mozilla for making it that. I will still be living and working in the Bay Area, so I do hope to keep in touch with folks here.

In my new role, I will be working as an operations engineer with a focus on developing tools to better understand and monitor Hadoop environments (at least initially). The goal would be to open source these tools and contribute them to the community.

Hope to see you on the other side..



shelldap to the rescue!

2010-04-07T17:30:10.072-07:00

I just discovered shelldap through my trusty dselect.. (I know, I am old and lazy and not in touch with the times, I really should be using apt-cache, but wth!). Anyhoo... shelldap is a pseudo shell on top of an LDAP DIT. You can cd into the different branches, grep within them for entries, and edit individual entries in LDIF format with your favorite editor!

Other tools like phpldapadmin and ldapsearch have their uses, but this is the most usable LDAP browsing and editing tool I have found so far. Figured someone else out there might find a use for it. Thanks, Mahlon E. Smith, for shelldap!
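For a flavor of how it works, a session looks roughly like this (a sketch, not a real transcript; the server name and DNs are made up):

```shell
$ shelldap --server ldap.example.com --binddn cn=admin,dc=example,dc=com
~ > ls
ou=People
ou=Groups
~ > cd ou=People
ou=People > edit uid=jdoe     # opens the entry as LDIF in your $EDITOR
```

Being able to poke around the tree with cd/ls muscle memory instead of composing search filters is what makes it so handy.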



A two node HA cluster - mini howto

2009-09-07T16:31:24.575-07:00

One of our goals this quarter has been to make our LDAP service more reliable. We tried using the Cisco ACE load balancer in front of two LDAP slaves, but that doesn't allow for custom application checks. Simple port checks aren't good enough for this; we needed a more thorough check to verify that our OpenLDAP instances were up and working correctly. So we decided to implement this in software using the Linux HA stack, which lets you combine a few servers into a cluster that provides highly available service(s). In HA terminology, the services provided by the cluster are called resources.

The HA stack is made up of multiple components that work together to make resources available. The first of these is the heartbeat daemon. It runs on every node (server) in the cluster and is responsible for ensuring that the nodes are alive and talking to each other. It also provides a framework for the other layers in the stack. Although there are a bunch of other options you could use, a basic configuration tells heartbeat about the members of the cluster, establishes a communication mechanism between them, and sets up a secret auth key so that only nodes that know the key can join the cluster. Here is a sample config file for heartbeat:

[root@server1 ha.d]# cat /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
deadtime 30
keepalive 1
warntime 10
initdead 120
udpport 694
bcast bond0
mcast bond0 239.0.0.1 694 1 0
auto_failback on
node server1
node server2
debug 0
crm on
[root@server1 ha.d]#

[root@server1 ha.d]# cat /etc/ha.d/authkeys
auth 2
2 sha1 4BWtvO7NOO6PPnFX
[root@server1 ha.d]#

With the above configuration, we establish two modes of communication between the cluster members (server1 and server2): broadcast and multicast, both over the bond0 interface. Other communication methods are possible as well (serial cable, etc.). In this case, since both modes go over the same interface, the second is probably redundant and not all that fool-proof. The authkeys file establishes the secret key that nodes need to know to join this cluster.

Heartbeat by itself can also be used to manage and make the cluster resources available; however, in that configuration it is limited to only two nodes. A newer implementation was developed to remove this limitation and was spun off to become the pacemaker project. The last line, "crm on", tells heartbeat that we will use an external Cluster Resource Manager (pacemaker in this case) to handle resources. Please note that there is a newer software layer called OpenAIS that provides services similar to heartbeat. It is being developed jointly by Red Hat and SUSE and aims to be a certified implementation of the Application Interface Specification (AIS). I found it pretty confusing and decided to stick with heartbeat for our needs.

Pacemaker can be used to provide a variety of services and is frequently used to manage resources that access shared data. A common example is an NFS server that exports data from a shared block-level device (like an iSCSI disk). Scenarios like this require that only one host in the cluster access the shared disk at any time; bad things happen when multiple hosts try to write to a single shared physical disk simultaneously. When a member node fails to relinquish such shared resources, it must be cut off from them. Heartbeat relies on a service called stonith (Shoot The Other Node In The Head), which basically powers off misbehaving hosts in such cases. This service is usually hooked up to some sort of remote power management facility for the nodes in the cluster. Our situation doesn't need that stuff, so my configuration does not cover stonith.
Disable stonith with "crm_attribute --type crm_config -n stonith-enabled -v false". The pacemaker project provides binaries for almost all Linux distributions (using the openSUSE Build Service - thanks guys!). Configuring pacemaker can seem daunting at fi[...]
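To give a flavor of what a pacemaker resource definition looks like, here is a sketch using the crm configure shell. The resource name and IP address are made up for illustration; a floating service IP like this is a common way to front a pair of LDAP slaves:

```shell
# Hypothetical: a virtual IP resource the cluster moves between nodes,
# monitored every 10 seconds. Adjust ip/netmask for your network.
crm configure primitive ldap-ip ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.50 cidr_netmask=24 \
    op monitor interval=10s

# Review the resulting cluster configuration
crm configure show
```

Clients point at the virtual IP, and pacemaker keeps it on whichever node passes the health checks.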



LDAP password policy - how I learned to stop worrying..

2009-05-30T20:36:15.284-07:00

I have had to live through implementing a controversial password rotation policy at Mozilla. We had some bitter battles along the lines of "you dimwits, it doesn't do anything for security" and "you are a Nazi for trying to make me change my password so often". While I am still firmly in the "it's a good thing" camp, we have found OpenLDAP's support for password policy somewhat lacking. In particular, it does not distinguish between a few brain-dead applications failing multiple times with a single incorrect password and a crack attempt using different incorrect passwords. I tried bringing it up in their mailing lists, but that thread didn't get too far. We have an ITS request on file as well. At this point, I wimped out and outsourced the problem. Enter Zytrax and the awesome Ron Aitchison; I can't recommend his "OpenLDAP - here is what all this gobbledegook means" guide enough. Jeff Clowser, Ron, and I fleshed out the details in the spec, and I am now happy to announce that we have patches that work against 2.4.11 and 2.4.16. It's been running on our servers for a few days now and seems to be holding up okay.

Here is how it works. The patch introduces a new attribute, pwdMaxTotalAttempts. Quoting from the README: 'The attribute may take one of three values. If pwdMaxTotalAttempts is zero (0) or not defined then no repeat password checking is performed. If pwdMaxTotalAttempts is -1 repeat password checking is performed and an unlimited number of attempts with any number (up to the limit defined by pwdMaxFailure) of repeat passwords are allowed'.
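The attribute lives on a regular ppolicy entry alongside the standard ones. A hypothetical policy entry enabling repeat-password tracking might look like this (the DN and values are made up for illustration; this is not our production policy):

```ldif
dn: cn=default,ou=pwpolicies,dc=mozilla
objectClass: device
objectClass: pwdPolicy
cn: default
pwdAttribute: userPassword
pwdMaxFailure: 5
pwdLockout: TRUE
pwdMaxTotalAttempts: -1
```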

To disable this new behavior you don't have to do anything (pwdMaxTotalAttempts is simply left undefined); explicitly setting it to 0 disables it as well. If you set it to -1, the new policy is enabled and repeat password attempts are tracked. Setting it to a positive number also enables the policy, and additionally gives you some limited DoS protection by capping total attempts. There is a risk to enabling the new module: it keeps track of your failed passwords (as SSHA hashes). So proceed with caution when you enable it.
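The SSHA scheme used for those stored failed-password hashes is easy to see in a few lines of Python. This is a sketch of the hashing scheme itself, not of the slapd patch: the point is that a newly failed bind can be compared against previously failed attempts without ever storing the cleartext.

```python
import base64
import hashlib
import os

def ssha(password: str, salt: bytes = b"") -> str:
    """OpenLDAP-style {SSHA}: SHA-1 over password+salt, then base64(digest+salt)."""
    salt = salt or os.urandom(4)  # a small random salt rides along in the hash
    digest = hashlib.sha1(password.encode() + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode()

def ssha_check(password: str, hashed: str) -> bool:
    """Verify a password against an {SSHA} hash by re-hashing with the stored salt."""
    raw = base64.b64decode(hashed[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
    return hashlib.sha1(password.encode() + salt).digest() == digest
```

Because each hash carries its own salt, two failed attempts with the same wrong password can be matched against each other, while attempts with different passwords cannot - which is exactly the distinction the stock ppolicy overlay was missing.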

HTH someone out in the ether.




ldap acls - locking accounts out

2009-04-24T00:54:49.085-07:00

One of the common situations folks face in an LDAP implementation is needing some sort of locking mechanism for old accounts. We use the employeeType attribute from the inetOrgPerson schema, though you could probably use any similar attribute (or even a custom one). One way to implement this locking is to add checks in your application code to filter out such accounts. That strategy is bound to leak: applications have bugs and coding errors, and the data may simply have multiple access points - a public address book, or folks querying the directory directly. A better way (imo) is to filter these accounts within the LDAP server itself. OpenLDAP allows you to set ACLs on specific search filters, which works beautifully for cases like this. As an example, we have this ACL in our slapd.conf file.

access to dn.children="ou=People,dc=mozilla" filter=(!(|(employeeType=Contractor)(employeeType=Employee)))
by group="cn=admins,ou=ldapgroups,dc=mozilla" write
by * none

This hides accounts with any other employeeType (anything other than Employee or Contractor) from everyone except the administrators. Of course, this depends on you setting the employeeType attribute to some appropriate value (like Retired) on inactive accounts.
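You can sanity-check the ACL from a client. As a sketch (the uid and bind DN here are hypothetical), an anonymous search for a retired account should come back empty, while a member of the admins group still sees it:

```shell
# Anonymous bind: the retired entry is filtered out by the ACL
ldapsearch -x -b "ou=People,dc=mozilla" "(uid=graybeard)"

# Binding as an admins-group member: the entry is returned
ldapsearch -x -D "uid=admin1,ou=People,dc=mozilla" -W \
    -b "ou=People,dc=mozilla" "(uid=graybeard)"
```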



iSCSI - fast, cheap and reliable?

2009-04-21T08:44:24.699-07:00

iSCSI is often touted as the new way forward for enterprise storage needs. This is mostly a rant, plus some notes on what I have found useful so far.

To begin with, iSCSI support in Linux is pretty good, and you will find a lot of resources on how to set it up. Here are the quick steps.
  • yum install iscsi-initiator-utils (or whatever tool you use to install stuff)
  • chkconfig iscsid on; chkconfig iscsi on (or update-rc.d)
  • echo "InitiatorName=<iscsi initiator name>" > /etc/iscsi/initiatorname.iscsi (put the real name in; make one up that's unique to this host). Make sure the iSCSI volume you want to mount allows this initiator name to connect to it.
  • service iscsid start
  • iscsiadm -m discovery -t sendtargets -p IPofIScsiHost
  • service iscsi start (now you should see a device sdX appear in dmesg)
  • when you put it in /etc/fstab, add "_netdev" to the options so that it doesn't try to mount before the network is available.
(You can get fancy with CHAP authentication etc., but for most cases it isn't needed.)
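Strung together, the steps above look something like this (a RHEL/CentOS-style sketch; the IQN and target IP are placeholders you must replace):

```shell
yum install -y iscsi-initiator-utils
chkconfig iscsid on; chkconfig iscsi on

# Placeholder IQN - use one unique to this host
echo "InitiatorName=iqn.2009-04.org.example:host01" > /etc/iscsi/initiatorname.iscsi

service iscsid start
iscsiadm -m discovery -t sendtargets -p 192.0.2.10   # placeholder target IP
service iscsi start                                   # watch dmesg for the new sdX

# Then in /etc/fstab, note the _netdev option:
#   /dev/sdb1  /mnt/iscsi  ext3  _netdev  0 0
```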

That should present an iSCSI block device to your system; check dmesg for the device name. Unfortunately, I was unable to find a simple iSCSI command that lists the mapped device name alongside the corresponding iSCSI LUN information. In the old days (with the Cisco iSCSI tools, I believe), there was a nice little iscsi-ls command that would list all your iSCSI devices and where they came from. You now have to manually match /proc/scsi/scsi entries with /sys/class/scsi_disk/<lun-number>/ and dig for the entries that way.
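One helper worth a look: the lsscsi utility can print transport information for each SCSI device, which gets you most of the way to the old iscsi-ls output (output format varies by version, and it may not be installed by default):

```shell
lsscsi -t   # lists SCSI devices with their transport; iSCSI-backed ones show the target info
```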

I have found that iSCSI in general is not all that fast. You can get decent performance out of it, but nothing close to dedicated local disk with a good hardware RAID solution. I have seen asymmetric read-write performance on iSCSI disks, with writes being a lot faster than reads. Also, don't waste your money on hardware iSCSI initiators: they don't help a whole lot with speed and are a pita to maintain (with out-of-kernel drivers in some cases). Most modern servers have plenty of horsepower to drive iSCSI with their CPUs. I have heard that running iSCSI on a 10 Gbps network helps with performance (we run it on a 1 Gbps network).

iSCSI is in general cheaper than comparable full SAN solutions, but performance does suffer. That's okay for most cases, except maybe for I/O-heavy database applications. Choosing the right vendor for your storage device matters a lot. Don't believe the specs you get from the vendors: try a few devices, run some tests (bonnie, plain old dd, iozone), and decide how much you want to sink into it.
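For quick numbers, even plain dd with direct I/O gives a rough read/write comparison before you bother with bonnie or iozone (the mount path is a placeholder; use a scratch file, never a live volume):

```shell
# Write test, bypassing the page cache so you measure the disk, not RAM
dd if=/dev/zero of=/mnt/iscsi/ddtest bs=1M count=1024 oflag=direct

# Read the same file back, again with direct I/O
dd if=/mnt/iscsi/ddtest of=/dev/null bs=1M iflag=direct

rm /mnt/iscsi/ddtest
```

Run each a few times; on our setup this is exactly where the write-faster-than-read asymmetry shows up.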

HTH someone out in the ether!