Subscribe: Rope on Fire
http://ropeonfire.blogspot.com/feeds/posts/default

Rope on Fire





Updated: 2015-09-16T12:50:49.898-04:00

 



Upgrading to Grails 2.0

2011-12-26T23:33:56.435-05:00

With the recent release of Grails 2.0, I upgraded OpenArc's ILocker and E-Arc software tonight. I ran into a few hurdles along the way and thought sharing them here might help someone else in the future.

First, I ran into some dependency conflicts and had to add some new lines to grails-app/conf/BuildConfig.groovy:
 runtime ('edu.ucar:netcdf:4.2-min') {
     excludes 'slf4j-api', 'slf4j-simple'
 }

 runtime ('org.apache.tika:tika-parsers:0.10') {
     excludes "commons-logging", "commons-codec"
 }

 runtime ('org.xhtmlrenderer:core-renderer:R8') {
     excludes "itext", "commons-logging", "commons-codec"
 }
In particular, I had lots of trouble with "commons-logging" and "slf4j".

Second, I'm using Java 7 (1.7.0_b147) and was getting the error "javac: target release 1.6 conflicts with default source release 1.7", so I had to add:

grails.project.source.level = 1.6
into grails-app/conf/BuildConfig.groovy as well.

Finally, and most perplexingly, I got everything running, but when I went to browse the application I just got a blank page: no error messages, nothing, just a blank page. Ugh. It turns out you must run:
grails install-templates
if you've installed the templates previously. It's documented in the upgrade notes - it would have been a nice thing to throw up a warning about too.



JQuery Mobile for web-based forms applications

2011-12-26T23:17:30.903-05:00

As a consulting company, OpenArc does a fair number of web-based forms applications for customers across a wide range of industries. In the last several months, we've really taken to using jQuery Mobile (JQM) for many of these applications.

As is typical for applications of this type, a clean and usable interface is far more important to our customers than a sexy, flashy look and feel. JQM gives us an easy framework to produce a modern-looking UI, tailored with the jQuery Mobile ThemeRoller and customized to match the client's brand requirements. Our clients are also very happy to know their applications can be accessed from a wide array of mobile devices.

Here are a few screenshots from just one of these applications. First, a dashboard of sorts:

(image)
A listing page, with the ever so useful "data-filter: true" attribute:

(image)
Finally, a meeting edit page showing a time picker control, still in progress:

(image)
We've had a few glitches along the way, but in general, our clients are very pleased with a JQM-based UI. This makes us very happy too! One issue we saw early on was the default ajax-based navigation not playing well on IE*, so for now we've disabled it via:

$.mobile.ajaxEnabled = false;
$.mobile.pushStateEnabled = false;

Normally, dialog boxes in JQM do not require full HTML pages, just HTML snippets (e.g. :layout => nil), as they are loaded via AJAX and work like jQuery UI dialogs. However, when you set "$.mobile.ajaxEnabled = false", JQM will no longer load dialogs via ajax, EVEN if you set "data-ajax=true" on the dialog links. That seems like a bug to me, ignoring "data-ajax=true".
A patch to fix the problem:

diff --git js/jquery.mobile.navigation js/jquery.mobile.navigation
index f85a491..181b9c9 100755
--- js/jquery.mobile.navigation
+++ js/jquery.mobile.navigation
@@ -1322,10 +1322,11 @@
 var baseUrl = getClosestBaseUrl( $link ),

 //get href, if defined, otherwise default to empty hash
- href = path.makeUrlAbsolute( $link.attr( "href" ) || "#", baseUrl );
+ href = path.makeUrlAbsolute( $link.attr( "href" ) || "#", baseUrl ),
+ isTargetDialog = $link.data("rel") === "dialog";

 //if ajax is disabled, exit early
- if( !$.mobile.ajaxEnabled && !path.isEmbeddedPage( href ) ){
+ if( !$.mobile.ajaxEnabled && !isTargetDialog && !path.isEmbeddedPage( href ) ){
 httpCleanup();
 //use default click handling
 return;

My only hope at this point is to see an expanded set of controls/plugins, ideally such that we'd no longer have need of jQuery UI. [...]



Grails and JCifs

2010-10-08T12:41:20.973-04:00

JCIFS is:
...an Open Source client library that implements the CIFS/SMB networking protocol in 100% Java. CIFS is the standard file sharing protocol on the Microsoft Windows platform.
As part of a project to provide schools and businesses with an open source solution to access their "My Documents" folder anytime/anywhere over the web, I recently had the pleasure of integrating JCIFS into my Grails application.

The obligatory screenshot:

(image)
I dropped the latest JCIFS jar file into my $GRAILS-APP/lib folder and began implementing the "My Documents" feature against a Samba server for starters. When I moved to a Windows 2008 server, everything fell apart: all operations started timing out. After some digging around in the rather extensive set of config options, I realized I needed the following in my Grails config file:
System.setProperty("jcifs.smb.client.dfs.disabled", "true");
Your environment may differ, but make sure you at least take a good look at the JCIFS configuration options.

Ok, so here's a simple example of removing a file:
 void removeFile(WorkspacePath p)
 {
     def ntlm = new NtlmPasswordAuthentication("", p.username, p.password);
     SmbFile file = new SmbFile(absoluteFilePath(p.url, p.path), ntlm);
     file.delete();
 }
Note: I pass "" as the first argument to NtlmPasswordAuthentication as the domain is part of p.username (e.g. joel@example.com).
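If you would rather pass the domain explicitly instead of leaving it embedded in the username, a splitting helper is easy to write. This is a hypothetical sketch, not code from the post, and the function name is my own:

```javascript
// Hypothetical helper (not from the post): split a UPN-style username like
// "joel@example.com" into its user and domain parts, for cases where you'd
// rather hand NtlmPasswordAuthentication an explicit domain.
function splitUpn(username) {
  var at = username.indexOf("@");
  if (at === -1) {
    return { user: username, domain: "" }; // no domain embedded
  }
  return { user: username.slice(0, at), domain: username.slice(at + 1) };
}
```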

One thing you need to make sure of is to always end directory paths with a "/"; otherwise you will get errors. Here's a more complicated example of an "eachFile" method that takes a closure as its final argument:
 public void eachFile(WorkspacePath p, Closure c)
 {
     println "eachFile ${p.url} - ${p.path}";
     def path = absoluteDirPath(p.url, p.path);
     def ntlm = new NtlmPasswordAuthentication("", p.username, p.password);
     SmbFile file = new SmbFile(path, ntlm);

     // are we dealing with a directory path or just a single file?
     if (!file.isDirectory()) {
         c.call([name: file.name, file: file, path: file.canonicalPath,
                 inputStream: { return new SmbFileInputStream(file); },
                 outputStream: { return new SmbFileOutputStream(file); }
         ]);
         return;
     }

     file.listFiles().each { f ->
         if (f.isDirectory()) return;
         if (f.isHidden()) return;

         c.call([name: f.name, file: f, path: f.canonicalPath,
                 inputStream: { return new SmbFileInputStream(f); },
                 outputStream: { return new SmbFileOutputStream(f); }
         ]);
     }
 }
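The absoluteFilePath/absoluteDirPath helpers are elided from the post. As a rough illustration of the trailing-slash rule, here is a hypothetical JavaScript sketch (not the post's actual Groovy helper) of what the directory variant needs to guarantee:

```javascript
// Hypothetical sketch: join an smb:// share URL and a relative path,
// guaranteeing the trailing "/" that JCIFS requires on directory URLs.
function absoluteDirPath(url, path) {
  var base = url.endsWith("/") ? url : url + "/";
  var rel = path.replace(/^\/+/, "");        // avoid a doubled separator
  var joined = base + rel;
  return joined.endsWith("/") ? joined : joined + "/";
}
```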
We've been quite pleased with JCIFS and how well it's been working in our Grails application. We are currently using 1.3.14 with the patches noted here. I just noticed that 1.3.15 is out, so I'm interested in trying it as soon as possible!



Grails and JackRabbit

2010-10-01T11:42:39.424-04:00

Here's a brief overview of how I plugged JackRabbit, a fully conforming implementation of the Java Content Repository specifications, into several of the Grails-based projects I've been working on recently.

Currently, I'm using JackRabbit for user-editable page content. Perhaps overkill, but I have plans to leverage additional JackRabbit features down the road.

First off, there is a Grails JackRabbit plugin, but it looked rather old and unmaintained and had no real documentation, so I just rolled my own solution.

Ok, so first, drop the JackRabbit jars into your $PROJ/lib/ folder:

(~/src/ilocker) ls -1 lib/
jackrabbit-api-2.1.1.jar
jackrabbit-core-2.1.1.jar
jackrabbit-jcr-commons-2.1.1.jar
jackrabbit-jcr-server-2.1.1.jar
jackrabbit-spi-2.1.1.jar
jackrabbit-spi-commons-2.1.1.jar
jcr-2.0.jar

An improved approach would be to add the appropriate directives to grails-app/conf/BuildConfig.groovy, but for now this will work.

Next you'll need an appropriately configured JackRabbit repository.xml file. I configured JackRabbit with a PostgreSQL DbDataStore. A sample of my configuration can be found here.

So how to get started? I created a grails-app/service/ContentService.groovy that starts out like this:

import org.springframework.beans.factory.InitializingBean;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.Node;
import org.apache.jackrabbit.core.TransientRepository;

class ContentService implements InitializingBean
{
    static scope = "singleton";
    def grailsApplication;
    Repository _repository;

    public void afterPropertiesSet()
    {
        def jcr = grailsApplication.config.jcr;
        _repository = new TransientRepository(jcr.repo.config, jcr.repo.home);
        log.info "Configuring Content Service ... config=${jcr.repo.config}, home=${jcr.repo.home}";
    }

My grails-app/conf/Config.groovy file has the following entries:

jcr.repo.home = "/var/lib/ilocker"
jcr.repo.config = "/etc/ilocker/repository.xml"

So the line

_repository = new TransientRepository(jcr.repo.config, jcr.repo.home);

above wires everything up to use /etc/ilocker/repository.xml and to set ${rep.home} = /var/lib/ilocker. Make sure the tomcat user has appropriate access to /var/lib/ilocker when you put the site into production!

Getting JackRabbit to work the first time around can be a little dicey, because JackRabbit will copy repository.xml to ${rep.home}/workspaces. If anything is misconfigured, it's easiest to just change repository.xml, delete ${rep.home}/workspaces, and try again. If you don't delete ${rep.home}/workspaces, your changes to repository.xml will have no effect (unless you create a new workspace). Take note!

Now to write content to our ContentService, I'm using:

public void put(String controller, String action, String data)
{
    Session session = _repository.login(new SimpleCredentials("username", "password".toCharArray()));
    log.info "ContentService.put ${controller} ${action}";
    try
    {
        Node controllerNode = getControllerNode(session, controller);
        Node node = getActionNode(controllerNode, action);
        Calendar lastModified = Calendar.getInstance();
        node.setProperty("jcr:lastModified", lastModified);
        node.setProperty("jcr:mimeType", "text/html");
        node.setProperty("jcr:encoding", "utf-8");
        node.setProperty("jcr:data", data);
        session.save();
    }
    finally
    {
        session.logout();
    }
}

Obviously, this completely ignores JackRabbit-level security.
To read content in my controllers, I write code like this, for example:

class AdminController
{
    def contentService;

    def index =
    {
        String content = contentService.get(controllerName, actionName);
        [ chtml: content ]
    }
}

And then in ContentService.groovy I have:

public String get(String controller, String action)
{
    Session session = _repository.login(new SimpleCredentials("username", "password".toCharArray()));
    String value;
    log.info "ContentService.get ${controller} ${action}";
    try
    {
        Node controllerNode = getControllerNode(session, controller);
        Node actionNo[...]



Updated NetCenter Screenshots

2009-08-12T22:38:01.959-04:00

Ok, no reams of code in this post, just some recent screenshots of NetCenter, an ajax-rich jQuery/Grails-based CRM I've been working on. Most of the icons below come from the Crystal Clear icon set on Wikimedia.

This first shot shows our TODO manager rollup/down side bar:

(image)
And the asset management module:

(image)
Who are those cute kids ;-) ?

And finally, the document management accordion panel for an account:

(image)
Seeing the product live is far more impressive (new-tab load speed, Yahoo map popups, click to call), but hopefully these screenshots give you a sense of the general UI layout of NetCenter. This is really the first time I've done a tab-oriented layout, but I thought it would be the best design for a web-based CRM where you jump around a lot, with multiple ways to get to the same information, and don't want to lose your place.



Document Management in NetCenter

2009-07-16T15:28:47.700-04:00

Although our mid to long term plans for NetCenter365 include Sharepoint and Alfresco integration, we currently provide a more streamlined, account oriented, document management capability within NetCenter that we think might better serve some organizations.

Documents in NetCenter are attached to customer records or accounts. Here's a screenshot:

(image)
On the backend, I created a C++/FUSE-based filesystem. When you mount it, you see a list of customer names as directories, under which the documents attached to each account are found. This metadata is stored in the NetCenter database, while the actual file contents are simply stored in a backing ext3 filesystem. This way it's easy to back up, restore, replicate, etc. Here's a snippet from account_node::readdir():
 int account_node::readdir(void *buf, fuse_fill_dir_t filler, off_t offset, struct fuse_file_info *fi)
 {
     filler(buf, ".", NULL, 0);
     filler(buf, "..", NULL, 0);

     pqxx::connection db(connect_string());
     pqxx::nontransaction work(db);
     pqxx::result result = work.exec("SELECT name,id,trunc(date_part('epoch',last_updated)),path FROM document where account_id=" + id());

     std::string did; long lctm; std::string rpath;
     for (pqxx::result::const_iterator r = result.begin(); r != result.end(); ++r)
     {
         filler(buf, r[0].c_str(), NULL, 0);
         did = r[1].c_str();
         r[2].to(lctm);
         rpath = r[3].c_str();

         std::string path = _path + "/" + r[0].c_str();
         _filesystem->set_attributes(path, attributes(did, lctm, rpath));
     }

     return 0;
 }
Whereas the code to read the actual file contents looks something like this:
 int poi_node::open(struct fuse_file_info *fi)
 {
     std::string fpath = full_path();

     int res = ::open(fpath.c_str(), fi->flags);
     if (res == -1)
         return -errno;

     ::close(res);
     return 0;
 }
With the virtual filesystem mounted, we simply serve it up via Apache WebDAV, and since we store the document metadata in the NetCenter database, it's very easy to provide the frontend UI via Grails.

As far as the frontend goes, one big complaint we've heard about other document management solutions is how confusing it is for some users to download a file, find it on their hard drive, edit it, go back to their browser, and upload a new version. That's a very frustrating set of steps for many users.

We built a very simple JetPack-based extension for Firefox that registers a "webdav://" protocol handler and passes such links off to OpenOffice, which already knows how to handle them properly, so there is no downloading, finding, editing, and re-uploading. OpenOffice saves the document directly back to our Apache WebDAV server that sits on top of the NetCenter virtual filesystem discussed above.

For Internet Explorer, we wrote a small C# based protocol handler that does almost the same thing but handles Microsoft Word or OpenOffice. Not quite as nice as the Firefox solution, but we can push out the MSI via AD group policy.



Grails, jQuery, and Yahoo Maps

2009-07-14T15:51:44.507-04:00

I recently completed a new NetCenter365 feature that uses Yahoo Maps to show the location of all current customers. Here's a screenshot:

(image)
I really appreciate Yahoo's "Maps Web Services" which include a helpful geolocation service.

First, we map our HQ with:
 var map = new YMap(document.getElementById('map'));
 map.addTypeControl();
 map.addZoomLong();
 map.addPanControl();
 map.setMapType(YAHOO_MAP_REG);

 var hq = new YGeoPoint(HQ.latitude, HQ.longitude);
 map.drawZoomAndCenter(hq, 11);
Then we use Grails and jQuery to loop through every customer and fire off the following ajax requests:
 var url = '${createLink(controller: "location", action: "latlong")}' + "/";


$.getJSON(url + ${account.id}, function(x) {
var pt = new YGeoPoint(x.latitude, x.longitude);
var m = new YMarker(pt);
m.addAutoExpand('${account.name.encodeAsJavaScript()}');
map.addOverlay(m);
});

The heart of the location/latlong method uses Yahoo's geolocation services. Here's a snippet of the groovy code:
 def geocoder = "http://local.yahooapis.com/MapsService/V1/geocode?appid=${APPID}"
if (account.line1) geocoder += "&street=" + URLEncoder.encode(account.line1);
if (account.city) geocoder += "&city=" + URLEncoder.encode(account.city);
if (account.state) geocoder += "&state=" + account.state;
if (account.zip) geocoder += "&zip=" + account.zip;

def xml = geocoder.toURL().text
def records = new XmlParser().parseText(xml);
location.latitude = records.Result[0].Latitude.text()
location.longitude = records.Result[0].Longitude.text()
Performance-wise, the map pops up quite quickly and the markers appear in rapid succession. This is aided by caching lat/long info to minimize geolocation requests.
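The caching idea can be sketched as a simple memoizing wrapper. This is a hypothetical illustration, not the NetCenter code; the lookup function it wraps stands in for whatever hits the geolocation service:

```javascript
// Hypothetical sketch: wrap a geocoder lookup so each account id is
// resolved at most once; repeat map loads reuse the cached lat/long.
function makeCachedGeocoder(lookup) {
  var cache = {};
  return function (accountId) {
    if (!(accountId in cache)) {
      cache[accountId] = lookup(accountId); // only hit the service on a miss
    }
    return cache[accountId];
  };
}
```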



CRM Integration via LDAP

2009-06-01T10:57:14.977-04:00

Our vision for NetCenter is to facilitate and drive a customer centric view of day to day activities within an organization. Whether you're in sales, engineering, administration, or elsewhere, we want to help organize your documents, emails, phone calls, projects, and other day to day activities in a customer centric way.

We also want a platform that's easy to use and integrates well into existing business systems.

As part of this effort, I recently completed exposing NetCenter contacts to mail clients like Zimbra, Outlook, and Thunderbird via a custom OpenLDAP backend.

All of these mail clients can leverage LDAP based address books, so we expose NetCenter contacts via LDAP so that you can quickly and easily send emails to prospective and current customers. Here's a screenshot from Outlook:

(image)
And Zimbra:

(image)
There's no real documentation on how to create a custom backend, but the back-null and back-shell backends are pretty good places to start.



Incoming call screen pops with sipX, rabbitMQ, and Adobe Air

2009-04-24T10:06:58.727-04:00

I just finished the first beta of NetCenterPlus, an Adobe AIR HTML-based tray application that presents screen pops for incoming calls on sipX systems. NetCenterPlus is part of NetCenter, a CRM/business productivity solution from NetServe365.

Here's a screenshot of the notification window on an incoming call:

(image)
On the backend, I implemented a solution very similar to the one I did for integrating sipX with ejabberd. There are two database triggers installed into the SIPXCDR database, the second of which is a PostgreSQL plperlu trigger which uses Net::Stomp to send a message to our RabbitMQ server, indicating the callerId of an incoming call, to the user registered for the destination extension. Not many lines of code:

CREATE FUNCTION cse_ncplus_change() RETURNS trigger AS $end$
    use Net::Stomp;
    my ($domain, $uid, $pwd) = @{$_TD->{args}};
    my $msg = $_TD->{"new"}{"from_id"};
    my $stomp = Net::Stomp->new({hostname=>'mq.nvizn.com', port=>'61613'});
    $stomp->connect({login=>$uid, passcode=>$pwd});
    my $user = $_TD->{"new"}{"username"};
    $stomp->send({destination=>"/$domain/ncplus/$user", body=>($msg)});
    $stomp->disconnect;
    return undef;
$end$ LANGUAGE plperlu;

The other plpgsql trigger looks up the destination extension and munges up a nice-looking incoming call number. That exercise is left to the reader.

Now we've got a message on a per-user queue for every incoming call on our sipX system. So what next? I wanted an easy-to-deploy, cross-platform tray application that would listen for incoming messages and present the screen pop. I looked at Mozilla Prism, Silverlight, and Adobe AIR. AIR was not my first choice to be honest, but the Prism project seems to have stagnated afaict, and Silverlight 2.0 on Linux doesn't look like it will be out anytime soon, so I went with AIR. After spending some time with the product, I've definitely grown in my appreciation of its ease of use and design.
It's really nice to be able to leverage existing web development skills to build these types of applications.

So what does the AIR application do? First off, I used air.Socket and JavaScript to implement a STOMP client. First the connection code:

air.trace("setting up MessageQueue...");
this.socket = new air.Socket();
var self = this;
this.socket.addEventListener(air.Event.CONNECT, function(event) {
    self.sendCommand("CONNECT\nlogin:guest\npasscode:" + password + "\n\n");
    self.state = self.STATE.CONNECT;
});

The main listener loop looks something like this:

this.socket.addEventListener(air.ProgressEvent.SOCKET_DATA, function(event) {
    switch (self.state) {
    case self.STATE.CONNECT:
        self.subscribe();
        break;
    case self.STATE.READY:
        var data = event.target.readUTFBytes(event.target.bytesAvailable);
        var lines = data.split("\n");
        if (lines[0] == "MESSAGE" && lines.length > 5) {
            msg_callback(lines[6]);
        }
        break;
    }
});

So a NetCenterPlus user installs the application via a web page (yet to be prettied up!). Then, the user enters their NetCenter username and password (again, this dialog needs some UI love. Did I mention I'm not a graphic artist?).

You can read and write to a local encrypted store in AIR via functions like this:

readFromLocalStore = function(key, defstr) {
    var item = air.EncryptedLocalStore.getItem(key);
    if (item == null)
        return defstr;
    return item.readUTFBytes(item.length);
}

saveToLocalStore = function(key, value) {
    var bytes = new air.ByteArray();
    [...]
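The listener above finds the body by hard-coded line offsets (lines[6]). A slightly more general sketch, hypothetical and not the NetCenterPlus code, parses the STOMP frame structure instead: a command line, header lines, a blank line, then the body:

```javascript
// Hypothetical sketch: parse a STOMP frame (COMMAND, header lines,
// blank separator line, body, optional trailing NUL) into parts,
// instead of indexing raw split lines.
function parseStompFrame(raw) {
  var sep = raw.indexOf("\n\n");
  var head = sep === -1 ? raw : raw.slice(0, sep);
  var body = sep === -1 ? "" : raw.slice(sep + 2).replace(/\0$/, "");
  var lines = head.split("\n");
  var headers = {};
  for (var i = 1; i < lines.length; i++) {
    var c = lines[i].indexOf(":");
    if (c > 0) headers[lines[i].slice(0, c)] = lines[i].slice(c + 1);
  }
  return { command: lines[0], headers: headers, body: body };
}
```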



NetCenter Click to Call

2009-04-07T17:33:46.511-04:00

I just completed adding "Click to Call" functionality to NetCenter. Since this is a bit difficult to demonstrate with screenshots, I made a YouTube video instead. I implemented "Click to Call" using Aloha and RabbitMQ, testing the solution on sipX.

I have a grails PlaceCallService that sets up a connection to our RabbitMQ instance like this:

static transactional = false;
ConnectionParameters connectionParameters;
ConnectionFactory connectionFactory;
ConfigObject config = ConfigurationHolder.config;

MessageQueueService()
{
    connectionParameters = new ConnectionParameters();
    connectionParameters.setUsername(config.mq.username);
    connectionParameters.setPassword(config.mq.password);
    connectionFactory = new ConnectionFactory(connectionParameters);
}

The actual message publish function looks something like this:

def publish(message)
{
    try
    {
        Connection conn = connectionFactory.newConnection(config.mq.host, AMQP.PROTOCOL.PORT);
        Channel ch = conn.createChannel();
        ch.queueDeclare(config.placeCall.routingKey);
        ch.basicPublish("", config.placeCall.routingKey, null, message.getBytes());
        ch.close();
        conn.close();
    }
    catch (Exception e)
    {
        log.error("Main thread caught exception: " + e);
        return false;
    }
    return true;
}

Then in a "third party call initiator" daemon, I unpack the message and use the Aloha stack to place the call:

try
{
    OutboundCallLegBean outboundCallLegBean = (OutboundCallLegBean) applicationContext.getBean("outboundCallLegBean");
    CallBean callBean = (CallBean) applicationContext.getBean("callBean");
    callBean.addCallListener(this);

    // create two call legs
    String callLegId1 = outboundCallLegBean.createCallLeg(URI.create(callee), URI.create(caller));
    String callLegId2 = outboundCallLegBean.createCallLeg(URI.create(caller), URI.create(callee));

    // join the call legs
    System.out.println(String.format("connecting %s and %s in call...", callLegId1, callLegId2));
    System.out.println(callBean.joinCallLegs(callLegId1, callLegId2));
}

This chunk of code is based on the helpful Third Party Call sample from Aloha's subversion repository.

Anyway, it's working well, consumes minimal resources on the web server (we just post a message to the "/placeCall/request" queue), and only took a few days to set up and deploy into production. Many thanks to the Aloha team, RabbitMQ folks, and sipX gurus. [...]



NetCenter CRM

2009-03-06T08:07:15.942-05:00

For the last month or so, I've been working on "NetCenter", a Grails 1.1 based CRM system that will integrate with sipX for call detail records, Zimbra or Exchange 2007 for email, calendaring, and time tracking, and finally Alfresco or Sharepoint for document management.

I've really enjoyed using Grails - it's a real productivity booster, and I really appreciate the separation of concerns you get with an MVC framework. I completed the sipX integration first and am now working with Exchange 2007 Web Services so that users can associate meetings with accounts and mark them billable/non-billable.

First a few screenshots, then a brief overview of the sipX integration. Note: in the screenshots below, the account and contact information is randomly generated test data, while the call records are real records coming out of our production sipX server.

Call Manager:

(image)
Account Calls:

(image)
Contact Calls:

(image)
I used the Grails Quartz Plugin and added a grails-app/jobs/CdrSyncJob.groovy that looks at licensees with registered sipX servers and then queries each sipX instance for call detail records that have not yet been processed.

I wanted call detail report generation to be as fast as possible, so the CdrSyncJob looks up the sipX callee and caller phone numbers against the contact and licencedUser tables, then writes a new "call" record into the NetCenter database and marks the sipX call record as having been processed so it can be ignored the next time the job runs. Now whenever anyone wants to view all calls made to any contact within a certain account, it's a simple database query with a few joins that doesn't involve any phone number normalization, determining whether a call is related to a known contact, ignoring interoffice calls, or figuring out the call direction.

Here are a few snippets from CdrSyncJob.
First the execute() method:

def execute()
{
    if (Environment.current == Environment.DEVELOPMENT)
        return;

    def licensees = Licensee.withCriteria {
        eq("active", true)
        isNotNull("sipHost")
    }
    licensees.each { syncCdrs(it); }
}

Then syncCdrs begins with some Groovy SQL like this:

def cdr = Sql.newInstance("jdbc:postgresql://${licensee.sipHost}/SIPXCDR",
    "username", "password", "org.postgresql.Driver")
cdr.eachRow("select * from view_call_records A, cdrs_sync B where A.id=B.id and NOT(B.done)")

Hmmm, I guess I should point out that view_call_records and cdrs_sync are custom tables. Here's the SQL:

CREATE VIEW view_call_records as
select id,
    SUBSTRING(caller_aor FROM '.*.*') as caller,
    LTRIM(LTRIM(SUBSTRING(callee_aor FROM '.*.*'), '8'), '1') as callee,
    connect_time as start_time,
    to_char(cdrs.end_time-cdrs.connect_time, 'MI') AS minutes,
    to_char(cdrs.end_time-cdrs.connect_time, 'SS') as seconds
from cdrs
where cdrs.termination != 'F' and cdrs.connect_time IS NOT NULL;

CREATE TABLE cdrs_sync (
    id integer PRIMARY KEY,
    done boolean DEFAULT FALSE
);

Anyway, the rest of syncCdrs is just about ignoring interoffice calls or calls to contacts we don't have on record, then adding new entries to the NetCenter call table:

new Call(callDirection: direction, callId: it.id, contact: contact,
    dateStarted: it.start_time, minutes: it.minutes, seconds: it.seconds,
    licensee: licensee, owner: owner).save();

and marking the call as processed in the cdrs_sync table.

Next time I get a chance to blog, I hope to show the Exchange integration and some jQuery snippets. jQuery has been a big productivity booster as well. Web development has come a long way! [...]
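For illustration, the callee normalization the view performs can be sketched outside SQL. This is a hypothetical JavaScript rendering, not project code; the regex for the AOR's user part is my own assumption, and it mirrors LTRIM(LTRIM(..., '8'), '1') by trimming runs of leading "8"s, then "1"s:

```javascript
// Hypothetical sketch of the SQL view's callee normalization: extract the
// user part of a SIP AOR like "<sip:81555@pbx.example.com>", then trim
// leading "8"s and "1"s (LTRIM strips every leading char in its set).
function normalizeCalleeAor(aor) {
  var m = aor.match(/sip:([^@>;]+)/);   // user part of the AOR (assumed pattern)
  var num = m ? m[1] : aor;
  num = num.replace(/^8+/, "");         // LTRIM(..., '8')
  num = num.replace(/^1+/, "");         // LTRIM(..., '1')
  return num;
}
```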



Integrating sipX with ejabberd

2009-01-13T11:35:30.694-05:00

I recently completed integrating our sipX-based VoIP platform with our ejabberd XMPP server, so that users can see when others are on the phone or not. There are a lot of similar integrations that people have done with Asterisk using its AMI API, but I haven't found anything similar for sipX yet, so we rolled our own for now. While it's not terribly exciting, here's a screenshot of what it looks like when someone is on the phone:

(image)
The solution I came up with involves 3 parts. First, I set up a clustered RabbitMQ server (an open source implementation of AMQP). I plan on using it to facilitate a loosely coupled, event-driven architecture for integrating multiple open source applications. I'm pretty happy with RabbitMQ thus far - about the only complaint I have is that they don't have any message tracing capabilities right now (version 1.5.0), which made it more difficult to debug my client-side code. I'm also hoping that sometime soon we start seeing debian packages for python/perl AMQP libraries. For now, I'm using Net::Stomp and the RabbitMQ STOMP adapter, which seemed like the most stable, easily deployed client-side solution.

On the XMPP server side, I created an erlang module that acts as a message consumer.
Each virtual host in our ejabberd server listens on a separate queue for presence messages generated by the sipX side and sends out XMPP presence updates to online sessions. After getting the RabbitMQ erlang client library installed, here's the code I used to connect and set up my consumer:

Connection = amqp_connection:start(Uname, Pwd, "mq.nvizn.com"),
Channel = amqp_connection:open_channel(Connection),
Qname = list_to_binary("/" ++ Host ++ "/presence/phone"),
Q = lib_amqp:declare_queue(Channel, Qname),
lib_amqp:bind_queue(Channel, , Q, Qname),
lib_amqp:subscribe(Channel, Q, self(), false),

Then I created a handle_info function that looks like this:

handle_info({ {'basic.deliver', DeliveryTag, _, _, _, _ },
              {content, ClassId, Properties, PropertiesBin, [Payload]} = Info}, State) ->
    %% Message processing here, then send out the XMPP presence update
    ...,
    BroadcastPresence = fun({U, S, R}) ->
        Dest = jlib:make_jid(U, S, R),
        ejabberd_router:route(FromJID, Dest, Presence)
    end,
    Sessions = ejabberd_sm:get_vh_session_list(State#state.host),
    lists:foreach(BroadcastPresence, Sessions),

Now on the sipX side, things are a bit more ugly, and when I have more time later, I'd like to rework this end. For now, I created a PL/pgSQL AFTER trigger on the SIPXCDR call_state_events table that handles new call state events ('S' and 'E' event_types, to be specific). This trigger inserts new rows into a new cse_summary table I created: one row when the call is set up and one on call termination, and it does this for each internal user. If the call involves two internal folks, you end up with 4 rows; if, on the other hand, one side is external, you end up with only 2 rows. This trigger also looks up the XMPP jid for the extension and records that in the generated cse_summary rows.
When a row is created in the cse_summary table, a separate PL/Perl AFTER trigger uses Net::Stomp to generate a call state event message for the RabbitMQ cluster. Here's what the PL/Perl trigger looks like:

my $stomp = Net::Stomp->new({hostname=>'mq.nvizn.com', port=>'61613'});
$stomp->connect({login=>$uid, passcode=>$pwd});
my $msg = sprintf("%s,%s,%s", $domain, $_TD->{"new"}{"event_type"}, $_TD->{"new"}{"jid"});
$stomp->send({destination=>"/$domain/presence/phone", body=>($msg)});
$stomp->disconnect;

Now, I'm just creating some debian packages and RPMs (for the sipX side), documenting how it w[...]



Load Balance Clustered Ejabberd Servers

2008-12-06T15:39:59.614-05:00

I recently completed setting up our XMPP infrastructure. After spending some time reviewing the current capabilities of jabberd2, openfire, djabberd, and ejabberd, I decided that ejabberd had the best combination of features for our needs: virtual hosting, LDAP integration, clustering support, shared rosters, and reasonably good documentation!

So after setting up the first ejabberd node (im1), with a test virtual host and working LDAP integration, I set up our second ejabberd node (im2) by copying /etc/ejabberd/ejabberd.cfg to the 2nd node, then running through the following steps.

First, launch an erlang shell as the ejabberd user with:

erl -sname ejabberd@im2 -mnesia extra_db_nodes "['ejabberd@im1']" -s mnesia

Then, to replicate all ejabberd tables in my configuration, I ran:

mnesia:change_table_copy_type(schema, node(), disc_copies).
mnesia:add_table_copy(offline_msg, node(), disc_only_copies).
mnesia:add_table_copy(privacy, node(), disc_copies).
mnesia:add_table_copy(sr_group, node(), disc_copies).
mnesia:add_table_copy(sr_user, node(), disc_copies).
mnesia:add_table_copy(roster, node(), disc_copies).
mnesia:add_table_copy(last_activity, node(), disc_copies).
mnesia:add_table_copy(disco_publish, node(), disc_only_copies).
mnesia:add_table_copy(pubsub_node, node(), disc_copies).
mnesia:add_table_copy(pubsub_state, node(), disc_copies).
mnesia:add_table_copy(pubsub_item, node(), disc_only_copies).
mnesia:add_table_copy(session, node(), ram_copies).
mnesia:add_table_copy(s2s, node(), ram_copies).
mnesia:add_table_copy(route, node(), ram_copies).
mnesia:add_table_copy(iq_response, node(), ram_copies).
mnesia:add_table_copy(caps_features, node(), ram_copies).
mnesia:add_table_copy(motd_users, node(), disc_copies).
mnesia:add_table_copy(motd, node(), disc_copies).
mnesia:add_table_copy(acl, node(), disc_copies).
mnesia:add_table_copy(config,node(),disc_copies).
After you quit the shell, you'll most likely need to move the resulting mnesia database files to the ejabberd user's $HOME folder.

Once both nodes were working correctly, I set up an LVS-DR load balancer with ldirectord. This proved to be rather straightforward.

First, the realservers (each ejabberd instance, im1 and im2) had to be configured with a local interface that listens on the load balancer's VIP (virtual IP). The most reliable way I found to set this up was with a simple
ip addr add 172.16.254.60/32 brd + dev lo label lo:vip
in /etc/rc.local. Then I set up a /etc/sysctl.d/60-ipvs-arp-rules.conf with:
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
On Ubuntu (and I think debian as well), you must also tweak /etc/sysctl.d/10-network-security.conf to disable source address validation:
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
That's pretty much it for the realservers.

Setting up the loadbalancer involves setting up the VIP in /etc/network/interfaces:
auto eth0:vip0
iface eth0:vip0 inet static
    address 172.16.254.60
    broadcast 172.16.254.60
    netmask 255.255.255.255
Then set up ldirectord (apt-get install ldirectord) with an /etc/ldirectord.cf like this:
# Global Directives
checktimeout=3
checkinterval=15
autoreload=yes
logfile="/var/log/ldirectord.log"
logfile="local0"
emailalert="joel.reed@nvizn.com"
emailalertfreq=3600
emailalertstatus=all
quiescent=yes

virtual=172.16.254.60:5222
    real=172.16.254.70:5222 gate
    real=172.16.254.72:5222 gate
    scheduler=wlc
    protocol=tcp
    checktype=negotiate
    service=simpletcp
    request="junk"
    receive="jabber.org"
It'd be really cool if there were some kind of builtin healthcheck call you could do on an ejabberd node, but alas there isn't, so I just send it a string of garbage ("junk" to be exact) and look for the jabber.org string in the XMPP respo[...]
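Since the simpletcp check is just "send a string, grep the reply", you can run the same probe by hand when debugging a realserver. A sketch of the matching half; the canned reply below is an assumption based on ejabberd answering garbage input with a stream header whose namespace mentions jabber.org:

```shell
# The matching logic ldirectord applies to whatever the server sends
# back: does the reply contain the string configured in receive=?
check_response() {
  printf '%s' "$1" | grep -q 'jabber.org'
}

# Canned reply resembling what ejabberd sends for garbage input
sample="<stream:stream xmlns:stream='http://etherx.jabber.org/streams'>"
check_response "$sample" && echo "realserver looks alive"

# A live probe (realserver IP from the ldirectord.cf above) would be:
#   printf 'junk' | nc -w 3 172.16.254.70 5222
```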



Alfresco on EC2

2008-11-03T10:29:47.762-05:00

Over the weekend, I created an Alfresco Labs 3b AMI on EC2, Amazon's cloud computing platform.

I took one of the Alestic Ubuntu 8.10 base images, added my own ec2-tools_0.1.deb package, and built out an AMI with Labs 3b running on the system tomcat5.5 instead of the bundled tomcat instance. That part was far more brutal than using EC2: you have to make quite a few changes to the catalina policy to get things working.

I made an Alfresco package that installs an /etc/tomcat5.5/policy.d/60alfresco.policy file that looks like this:
grant { 
permission java.lang.RuntimePermission "accessClassInPackage.org.apache.*";

permission java.lang.RuntimePermission "accessDeclaredMembers";
permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
permission java.util.PropertyPermission "alfresco.jmx.dir", "read,write";
permission java.util.PropertyPermission "webapp.root", "read,write";
permission java.io.FilePermission "/usr/share/java/servlet-api-2.4.jar", "read";
};

grant codeBase "file:${catalina.home}/bin/tomcat-juli.jar" {
permission java.io.FilePermission "/usr/share/tomcat5.5/webapps/alfresco/WEB-INF/classes/logging.properties", "read";
permission java.io.FilePermission "/var/lib/tomcat5.5/temp/-", "read,write,delete,execute";
permission java.io.FilePermission "/var/lib/tomcat5.5/temp", "read,write,execute";
};
All of my AMIs have a rebundle.sh script that can quickly upload an updated AMI. It looks something like this:
#!/bin/sh
ACCOUNTID=xxxxxx
CERTFILE=/etc/ec2/xxxxxxx.pem
KEYFILE=/etc/ec2/xxxxxxx.pem
ACCESSKEY=xxxxxxxxxxx
SECRETKEY=xxxxxxxxxx

umount /var/local
ec2-bundle-vol -u $ACCOUNTID -c $CERTFILE -k $KEYFILE -p ubuntu-8.10-appsuite-1.0-20081101 --ec2cert /etc/ec2/amitools/cert-ec2.pem -r i386
ec2-upload-bundle -b nvizn.com -m /tmp/ubuntu-8.10-appsuite-1.0-20081101.manifest.xml -a $ACCESSKEY -s $SECRETKEY
ec2-register nvizn.com/ubuntu-8.10-appsuite-1.0-20081101.manifest.xml
This made life a bit easier as I made changes to the image and uploaded them. I unmount /var/local at the start of the script as that's where I mount my EBS volume.



Samba4 on Ubuntu Intrepid

2008-10-20T21:29:08.889-04:00

Here's a brief rundown of my experiences with Samba4 on Ubuntu Intrepid.

I first tried the samba4 package in the ubuntu intrepid repositories, but when you do a
./setup/provision --realm=azulogic.com --domain=azulogic --adminpass=fubar --server-role='domain controller'
you get a python stackdump with
IOError: [Errno 2] No such file or directory: '/usr/etc/samba/smb.conf'
I tried creating a "/usr/etc/samba" folder (though the distaste was high), but then proceeded to get further file path errors.

So, next I switched to the Debian Experimental package. This worked much better.

After you apt-get install the package, you'll have to fix up /etc/init.d/samba4 - it's still looking for smbd (the samba3 daemon), whereas in samba4 it's now /usr/sbin/samba.

So, I just did a
ln -s /usr/sbin/samba /usr/sbin/smbd
to get it to work.
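An alternative to the symlink, if you'd rather not shadow the samba3 binary name, is to patch the daemon path in the init script itself. A sketch, exercised here on a throwaway copy rather than the real /etc/init.d/samba4 (the DAEMON= line is a guess at the script's contents):

```shell
# Simulate the relevant line of /etc/init.d/samba4 in a scratch copy
initscript=$(mktemp)
echo 'DAEMON=/usr/sbin/smbd' > "$initscript"

# The actual fix: point the init script at the samba4 binary
# (GNU sed; on the real box you'd edit /etc/init.d/samba4 directly)
sed -i 's|/usr/sbin/smbd|/usr/sbin/samba|' "$initscript"
cat "$initscript"
```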

After getting krb5, dns, and samba ready to go, I tried to join a linux machine running winbind 2:3.2.3-1ubuntu3 to the domain. No luck though:
(~) net ads join -U Administrator
Enter Administrator's password:
Failed to join domain: failed to lookup DC info for domain 'AZULOGIC.COM' over rpc: NT_STATUS_INTERNAL_ERROR
How do you fix this? One way is to run in the "single" process model mode. I changed /etc/init.d/samba4 to launch the samba daemon with -M single. Then you see a nice:
(~) net ads join -U Administrator
Enter Administrator's password:
Using short domain name -- AZULOGIC
Joined 'LTS' to realm 'azulogic.com'
One final note: as far as I can tell, the debian version (4.0.0alpha6-GIT-7fb9007) crashes when someone tries to change a password. So beware!



Secure Apt Repository Howto

2008-10-16T15:39:15.153-04:00

After a good bit of googling and poking around, I completed the setup of our secure apt repository here at nvizn.

Here's how you'd do it for an Ubuntu intrepid repository.

First, setup a directory tree that looks like this:
mkdir -p /var/www/packages/dists/intrepid/main/binary-i386/
mkdir -p /var/www/packages/intrepid/main
Then, install apt-utils, which provides apt-ftparchive, the tool that will do most of the heavy lifting.
apt-get install apt-utils
Now, drop all your .debs into /var/www/packages/intrepid/main/ and create an apt-ftparchive configuration file at /etc/archive.config

Here's what mine looks like:
Dir {
ArchiveDir "/var/www/packages";
CacheDir "/home/joel.reed/uploads/";
};

Default {
Packages::Compress ". gzip bzip2";
Sources::Compress ". gzip bzip2";
Contents::Compress ". gzip bzip2";
};

APT::FTPArchive::Release::Codename "intrepid";
APT::FTPArchive::Release::Suite "intrepid";
APT::FTPArchive::Release::Origin "Joel W. Reed";

TreeDefault {
BinCacheDB "packages-$(SECTION)-$(ARCH).db";
Directory "intrepid/$(SECTION)";
Packages "$(DIST)/$(SECTION)/binary-$(ARCH)/Packages";
SrcDirectory "intrepid/$(SECTION)";
Sources "$(DIST)/$(SECTION)/source/Sources";
Contents "$(DIST)/Contents-$(ARCH)";
};

Tree "dists/intrepid" {
Sections "main";
Architectures "i386";
}
Finally, run this sequence of commands:
apt-ftparchive generate /etc/archive.config
cd /var/www/packages/dists/intrepid/
apt-ftparchive -c /etc/archive.config release . > Release
rm -v Release.gpg
gpg -v --output Release.gpg -ba Release
When you're done, you'll end up with a /var/www/packages tree that looks something like this:
/var/www/packages/dists/intrepid
/var/www/packages/dists/intrepid/main
/var/www/packages/dists/intrepid/main/binary-i386
/var/www/packages/dists/intrepid/main/binary-i386/Packages.gz
/var/www/packages/dists/intrepid/main/binary-i386/Packages.bz2
/var/www/packages/dists/intrepid/main/binary-i386/Packages
/var/www/packages/dists/intrepid/Contents-i386
/var/www/packages/dists/intrepid/Release
/var/www/packages/dists/intrepid/Release.gpg
/var/www/packages/dists/intrepid/Contents-i386.gz
/var/www/packages/dists/intrepid/Contents-i386.bz2
/var/www/packages/intrepid
/var/www/packages/intrepid/main
/var/www/packages/intrepid/main/alfresco-r3184-0.3.1.deb
/var/www/packages/intrepid/main/nvizn-base-0.3.6.deb
/var/www/packages/intrepid/main/libnss-cache_0.1-1_i386.deb
/var/www/packages/intrepid/main/nsscache_0.8.4.1_all.deb
/var/www/packages/intrepid/main/stratus-desktop-0.2.deb
/var/www/packages/intrepid/main/packages-main-i386.db
/var/www/packages/intrepid/main/jsetup_0.5.1_all.deb
Now, to make all this work, you need a gpg key of course, apache configured to serve up /var/www/packages, and the public key installed on all client machines. To fetch the key from a keyserver and add it, do something like:
gpg --recv-keys B1850655 && gpg --export B1850655 | apt-key add -
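With the key imported, each client just needs a sources.list entry pointing at the repository; the hostname below is a placeholder for wherever apache serves /var/www/packages:

```
deb http://packages.example.com/packages intrepid main
```

After the next apt-get update, the signed Release file should verify against the imported key.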
Hope this is helpful to you!



Startup

2008-10-13T22:27:04.695-04:00

I haven't blogged for while, because I've been putting a lot of hours into an open source startup company. It's been great fun to work with some new technologies like Groovy, Grails, CouchDB, and Samba4.

Among other things, I set up an openldap server, built a few custom overlays (www.openldap.org/lists/openldap-software/200807/msg00002.html), and integrated Zimbra, Alfresco, Openfire, SipX, Samba3, and an Ubuntu desktop. Each of these integrations has its pros and cons; perhaps Zimbra and SipX are the nicest.

I'm hoping to blog about my experience with Samba4 shortly.



OpenTF 0.6.0 Release

2008-02-04T17:20:01.971-05:00

Wow - two months without a blog post and 3 months since my last OpenTF release! For the last month or two, I really haven't worked much on OpenTF, preferring instead to work on learning NT Greek and more about the Book of Isaiah.

The latest release includes a few new goodies and many bugfixes. There's the new IRC changeset notification bot, support for CruiseControl (an open source continuous build framework), a monodevelop plugin for browsing TFS servers, and several new commands like "shelve", "rollback", and "merges".

Over the next few months, I hope to further develop the monodevelop plugin, continue work on missing commands, and begin testing other open source Team Foundation tools for compatibility with the OpenTF libraries.



Job Openings

2007-11-30T13:21:45.329-05:00

If there's anyone looking for an ASP.NET developer position in the Pittsburgh (PA) area and you've contributed to the mono project in the past, please put a link to your resume in the comments for this post. We're a great company to work for and are early adopters of .NET related technologies. I'd love to be able to hire folks who have helped out the mono project. Thanks!



MonoDevelop and Team Foundation

2007-11-08T23:25:01.041-05:00

Now that MonoDevelop is nearing a 1.0 release, I thought I'd take another look at fleshing out a TeamFoundation plugin for MD.

For starters, I'm taking the "tf explore" command in OpenTF, factoring out the Gtk classes into a separate assembly, then building out an MD addin that makes use of it.

Obligatory screenshot: (image)

It will take a while to clean up this code and the build machinery, and to figure out how to make better use of builtin MonoDevelop addin services, but perhaps in a release or two, we'll have something useful. If you're interested in helping, please do!

At some point, I'm also hopeful that those developing the VersionControl API for MD can consider the needs of a Team Foundation plugin. I'd be very interested in seeing if we can make something work for SVN, GIT, etc. and TFS. That'd be a much better situation.



OpenTF Build Changes

2007-10-29T21:09:36.894-04:00

I started the OpenTF project out by copying the Mono Olive tree, and replacing its assemblies and tools with my Team Foundation files. This worked well on *nix, but recently I've been trying to improve support for building on Windows as well.

Should I use cscript, nmake, powershell, BAT, project files, or some combination of these? How could I implement a build solution that didn't just duplicate the same build instructions (source files, references, etc) in 2 different formats: one for windows and one for *nix?

I decided to keep things simple on Windows - just use a VS2005 solution with a bunch of project files. Then for *nix, I decided to make libxslt's xsltproc a build requirement, and generate the list of sources and references for the mono olive make machinery using a few simple XSL stylesheets.

For example, all the .sources files are now generated via build/sources.xsl.
I also have .references files for each assembly, also generated via an XSL file from the .csproj.

Now, I just maintain the VS2005 project files and leave the *nix build stuff to the stylesheets. I added support for conditional sources using the Conditional attribute. It's working quite well thus far.



Using git-svn with Mono

2007-10-02T21:20:12.526-04:00

Why use git to hack on mono?

I've found myself far more productive making heavy use of feature branches and squashed commits in my day job, so when I hack on mono, I really enjoy being able to leverage the same capabilities.

By "squashed commits", I really mean leveraging the power of a distributed version control system that lets me break a task down into many smaller steps, commit each step individually, then squash the whole thing down to one patch that I can post to mono-devel for review.

So how do you set things up to use git with Mono?
cd /usr/local/src/
mkdir mono && cd mono
mkdir mcs && cd mcs
git-svn init svn+ssh://username@mono-cvs.ximian.com/source/trunk/mcs
git-svn fetch -r 86200 && git-svn fetch
cd ..
mkdir mono && cd mono
git-svn init svn+ssh://username@mono-cvs.ximian.com/source/trunk/mono
git-svn fetch -r 86200 && git-svn fetch
Note: the above recipe copies the svn history only back to revision 86200. You can pick any valid svn revision number you like, or if you want the full revision history see this page on the Mono wiki.

Ok. Everything's setup. Now what?

First, let's say I want to hack on some ASP.NET ashx page bug. I'll setup a local branch "ashx" to store whatever code I write/change:
git-checkout -b ashx
Now I have two branches: "master" which was setup by git-svn above, and "ashx" which I just created and switched over to. Now, I can:
emacs -nw class/System.Web/...
git-commit -a -m "1st step"
emacs -nw class/System.Web/...
git-commit -a -m "2nd step"
emacs -nw class/System.Web/...
git-commit -a -m "3rd step"
Ok, now to post a message to mono-devel:
git-diff master ashx > ~/Bug6884.fix
mutt
When everything looks good and no one has any complaints, I can finally commit back to mono's svn repository with:
git-branch master
git-pull --squash --summary . ashx
git-commit -a -m "message for mono's svn"
git-svn dcommit
This "squashes" my commits down into one batch of changes on the master branch, which I then commit and push to the svn repo.

Finally, How do I update my local tree?
git-svn fetch && git-svn rebase remotes/git-svn
Note: This will update the current branch you are on locally.

Hope someone finds this helpful!



test RPM for tf4mono

2007-09-20T23:52:25.953-04:00

I just caught up on my reading of the mono mailing list and saw Miguel's post about Mono Packaged .NET apps for Mono.

Since I have debian packages and a win32 installer for tf4mono, I thought it might be time to make an RPM package as well and maybe help this QA effort.

Anyway, I downloaded the very helpful Mono 1.2.5 VMWare image and went to work on creating a spec file for rpmbuild. Side note: cleaning out the bash history and ~/.ssh might be a sensible improvement to this image.

I had to try and remember all the old rpm command line options I used to use in my sleep - as I fell in love with debian's apt-get several years ago and forgot most rpm incantations.

Anyway, here's the resultant RPM package. By the way, I enabled the optional gtksourceview-sharp based syntax highlighting in the package.

If anyone can review the tfs.spec.in file or the RPM file and offer suggestions for improvement, please do so. I'll gladly make any necessary cleanups.

I installed the package and ran "tf show build" and "tf show stats /server:my.tfs.server.ip" and "tf explore /server:my.tfs.server.ip" and everything seemed in order. The "tf show" commands are new in the soon to be released 0.5.2 version of tf4mono.



Monthly Sleep Deprivation

2007-09-07T23:21:14.662-04:00

Oddly enough, I seem to have fallen into a schedule of releasing updates
to tf4mono about once a month. This month's release is
tf4mono 0.5.1
which includes win32 installation packages, a GTK-based gui mode
for exploring TFS repositories, many command enhancements, improved builtin help with
usage guidelines, and numerous bugfixes.

I always need a bit of downtime after a release first of course -
working on open source software as a hobby is fun,
but always ends up meaning lost sleep every so often.

Anyway, I'm interested in hearing what features would make tf4mono
more useful to you. Better support for locking files? Handling merge conflicts?
Easier building on win32 platforms? More GUI support? Let me know!

The master/trunk branch of tf4mono just got a "stats" command which makes use of
/VersionControl/v1.0/administration.asmx to generate some server statistics.

Here's some sample output:
(~/Source/tfs-lsg-1.0) tf stats
Files: 812421
Folders: 20033
Groups: 481
Pending Changes: 7907
Shelvesets: 180
Users: 184
Workspaces: 154
I plan on augmenting this output a bit, but it's a good start.

The most nagging issue for me is actually an NTLM bug in mono that I keep hoping someone will eventually fix. It could be bug
#80687, though I'm not sure of it. On windows boxen, tf4mono never gives auth failures, but on mono it occasionally does - especially on a fast network.

By accident, I noticed that I never saw any auth failures when working from home
over a VPN, but at work I'd see the auth failures quite regularly. If I route my
TFS traffic at work thru my home machine, thru the VPN, and back to work I never see auth failures. So it seems the faster the network the more likely you are to see this NTLM bug in mono.



tf4mono for windows

2007-08-30T16:15:36.658-04:00

Thanks to Nullsoft Scriptable Install System, I've created some win32 installation packages for tf4mono.

There are two install options.

The first, tf4mono-base-0.5.1-rc1.exe, has been compiled without any GUI code. It has no external dependencies and should run on any win32 box with the .Net 2.0 framework installed.

The second package, tf4mono-full-0.5.1-rc1.exe, includes the graphical TF explore command. To run this version on win32, you must first install the Gtk# Installer for Windows.

Neither package adds the tf4mono installation folder to the SYSTEM or USER path. You'll have to do this by hand for now.

If you're on a windows box and have a few minutes to test out the package, please do so. Any feedback on how they work would be awesome.