Planet Ubuntu
http://planet.ubuntu.com/
Feed: http://planet.ubuntulinux.org/rss20.xml

Ubuntu Insights: Webinar: Running an enterprise-grade OpenStack without the headaches

Wed, 07 Dec 2016 17:25:39 +0000


Watch Webinar On-demand

In Canonical’s latest on-demand webinar, we explore how to build and manage highly scalable OpenStack clouds with BootStack on a Supermicro reference architecture.

Supermicro offers end-to-end green computing solutions for the data center and enterprise IT. Its solutions span servers, storage, blades, workstations, networking devices, server management software, and technology support and services, making it an ideal choice for BootStack deployments.

With BootStack, Canonical experts build, operate and optionally transfer a customer’s OpenStack cloud, either on-premises or in a hosted environment. Using a hyper-converged solution stack that has been tested and validated in Supermicro labs, customers can choose a best-in-class hardware platform with OIL validation, the leading production OS for OpenStack deployments, and a networking overlay, all delivered as a fully managed service. Using Juju, the application and service modelling tool, BootStack customers can easily integrate the infrastructure and operations they need.

Join Arturo Suarez and Akash Chandrashekar from Canonical, and Srini Bala from Supermicro, as they explore a rich landscape of opportunities combining Juju with Supermicro’s certified platforms to help you tackle the challenge of building and maintaining complex microservices-based solutions like OpenStack and Kubernetes.

Watch On-Demand




Ubuntu Insights: Jamming with Ubuntu Core

Wed, 07 Dec 2016 16:42:54 +0000

We celebrated the launch of Ubuntu Core 16 with a hackathon, or shall we say Snapathon, that took place in Shenzhen on 26-27 November. The 30+ hour gathering and coding session was attended by developers, makers and anyone interested in the Internet of Things and the technology that powers it. Attendees came from different backgrounds, industries and places, but demonstrated the same passion and interest to learn about Ubuntu Core and be part of the evolution of the Internet of Things.

From smart homes to drones, robots and industrial systems, Ubuntu Core provides robust security, app stores and reliable updates. Ubuntu makes development easy, and snap packages make Ubuntu Core secure and reliable for widely distributed devices. This hackathon demonstrated how easy it is to package a snap and work with Ubuntu Core. We kicked off with a tech talk on Ubuntu Core, and then the hacking session took place:

  • 10 teams on site
  • 29 hours of non-stop coding
  • 6 different types of hardware / dev boards and sensors (Raspberry Pi 3, Qualcomm DragonBoards, LeMaker HiKey 96Boards, Intel NUCs, Dell Gateways, and Pine A64 boards)
  • And 7 snaps were born!

1. snap: water-iot-service, by Jarvis Chung and Lucas Lu. This application helps monitor and test water quality and status in different environments, especially where direct human access is difficult or dangerous. It uses a Raspberry Pi 3 and a few sensors to gather data, which is sent remotely to QNAP NAS systems for analysis. Results can be accessed through a web interface. The team behind this project is from QNAP Systems; more information on QNAP and their solutions can be found here.

2. Project Cooltools, snap: sensor-gw, by Hao Jianlin. This project uses a TI SensorTag to sense the light conditions at a location, and the snap auto-adjusts a smart bulb’s lighting accordingly to achieve an optimized ambience. It is powered by Ubuntu Core running on a Qualcomm DragonBoard. A useful addition to your smart home solution! The team behind this project is from Shenzhen CoolTools, a startup focused on smart IoT solutions and applications.

3. snap: crazy-app, by Crazyou. Crazy-app was developed by Crazyou, a startup robotics company based in Shenzhen. Their snap provides remote monitoring, remote control and administration for their robots, as well as remote access to a robot’s webcam to capture surrounding images! More information about Crazyou and their robots can be found here.

4. snap: Simcaffe, by Lao Liang. Running on a Qualcomm DragonBoard 410c and powered by Ubuntu Core, the project comes with an AI built using the Caffe deep learning framework, which can be trained to recognize different images. It was designed for smart surveillance systems. The project code is available on GitHub.

5. snap: sutop, by team PCBA. Sutop is a simple yet handy system admin tool that can monitor and manage your device’s system remotely.

6. My wardrobe, by Li Jiancheng. Powered by Ubuntu Core and running on a Raspberry Pi, it’s a simple snap that stores images of all your clothes and helps organize them, offering matching suggestions when you need some help getting stylish.

7. Project Cellboot, by Shen Jianfeng. A cluster snap that can use all connected Ubuntu Core devices to perform clustered data computing and analysis tasks.

Besides the above, a couple more projects were developed during the hackathon, but with the time limit they didn’t make it to the demo stage, though we look forward to seeing them in the store at some point soon! It was indeed a long night in Shenzhen, but the amount of ideas and innovation that came out of it was amazing. Until next time! [...]



Dustin Kirkland: A Touch of Class at Sir Ludovic, Bucharest, Romania

Wed, 07 Dec 2016 16:00:48 +0000

A few weeks ago, I traveled to Bucharest, Romania for a busy week of work, planning the Ubuntu 17.04 (Zesty) cycle. I did have a Saturday and Sunday to myself, which I spent mostly walking around the beautiful, old city. After visiting the Romanian Athenaeum, I quite randomly stumbled into one truly unique experience. I passed a window shop for "Sir Ludovic Master Suit Maker" which somehow caught my eye.

I travel quite a bit on business, and you'll typically find me wearing a casual sports coat, a button-up shirt, nice jeans, cowboy boots, and sometimes cuff links. But occasionally, I feel a little under-dressed, especially in New York City, where a dashing suit still rules the room.

Frankly, everything I know about style and fashion I learned from Derek Zoolander. Just kidding. Mostly.

Anyway, I owned two suits. One that I bought in 2004, for that post-college streak of weddings, and a seersucker suit (which is dandy in New Orleans and Austin, but a bit irreverent for serious client meetings on Wall Street).

So I stepped into Sir Ludovic, merely as a curiosity, and walked out with the most rewarding experience of my week in Romania. Augustin Ladar, the master tailor and proprietor of the shop, greeted me at the door. We then spent the better part of 3 hours, selecting every detail, from the fabrics, to the buttons, to the stylistic differences in the cut and the fit.

Better yet, I absorbed a wealth of knowledge on style and fashion: when to wear blue and when to wear grey, why some people wear pin stripes and others wear checks, authoritative versus friendly style, European versus American versus Asian cuts, what the heck herringbone is, how to tell if the other guy is also wearing hand-tailored attire, and so on...

Augustin measured me for two custom tailored suits and two bespoke shirts on a Saturday. I picked them up 6 days later on a Friday afternoon (paying a rush service fee).

Wow. Simply, wow. Splendid Italian wool fabric, superb buttons, eye-catching color-shifting inner linings, and an impeccably precise fit.

I'm headed to New York for my third trip since, and I've never felt more comfortable and confident in these graceful, classy suits. A belated thanks to Augustin. Fabulous work!

Cheers,
Dustin

[...]



Stéphane Graber: Running snaps in LXD containers

Wed, 07 Dec 2016 14:37:24 +0000

Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install that “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and point your web browser at “http://<container IP>”:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|   NAME    |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it. This time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s clear the LXD that came pre-installed with the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Start[...]
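
Both recipes above begin with the same container preparation, which is easy to wrap in a small helper; a sketch (the function name is purely illustrative, and the sleep is a crude wait for container networking):

# illustrative helper: launch an Ubuntu 16.10 container ready for snaps
snapready() {
    name="$1"; shift
    lxc launch ubuntu:16.10 "$name" "$@"
    sleep 5                                  # give the container time to get networking
    lxc exec "$name" -- apt update
    lxc exec "$name" -- apt dist-upgrade -y
    lxc exec "$name" -- apt install squashfuse -y
}

# usage, mirroring the two walkthroughs:
snapready nextcloud
snapready lxd -c security.nesting=true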



The Fridge: Ubuntu Weekly Newsletter Issue 490

Tue, 06 Dec 2016 04:17:22 +0000


Welcome to the Ubuntu Weekly Newsletter. This is issue #490 for the week November 28 – December 4, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.




Seif Lotfy: Playing with .NET (dotnet) and IronFunctions

Mon, 05 Dec 2016 22:15:39 +0000

Again, if you missed it: IronFunctions is an open-source, lambda-compatible, on-premises, language-agnostic, serverless compute service. While AWS Lambda only supports Java, Python and Node, IronFunctions allows you to use any language you desire by running your code in containers. With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it in IronFunctions’s fn tool.

TL;DR: The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it.

Using dotnet with functions

Make sure you have downloaded and installed dotnet. Now create an empty dotnet project in the directory of your function:

dotnet new

By default dotnet creates a Program.cs file with a main method. To make it work with IronFunction’s fn tool, please rename it to func.cs:

mv Program.cs func.cs

Now change the code as you desire to do whatever magic you need it to do. In our case the code takes in a URL for an image and generates an MD5 checksum hash for it. The code is the following:

using System;
using System.Text;
using System.Security.Cryptography;
using System.IO;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }

        private static byte[] DownloadRemoteImageFile(string uri)
        {
            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();
            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }

        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: IO with an IronFunction is done via stdin and stdout, which is exactly what this code does.

Using with IronFunctions

Let’s first init our code to become IronFunctions deployable:

fn init <username>/<funcname>

Since IronFunctions relies on Docker to work (we will add rkt support soon), the <username> is required to publish to Docker Hub. The <funcname> is the identifier of the function. In our case we will use dotnethash as the <funcname>, so the command will look like:

fn init seiflotfy/dotnethash

Running the command will create the func.yaml file required by functions, which can be built by running:

fn build

Push to docker

fn push

This will create a docker image and push the image to docker.

Publishing to IronFunctions

To publish to IronFunctions run:

fn routes create <appname>

where <appname> is (no surprise here) the name of the app, which can encompass many functions.

This creates a full path in the form of http://<host>:<port>/r/<appname>/<funcname>[...]
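
Putting the workflow together end to end, a sketch against a locally running IronFunctions server (the app name myapp is illustrative; dotnethash is the function name used above, and the image URL is just an example):

fn build
fn push
fn routes create myapp
echo "https://example.com/image.png" | fn call myapp dotnethash
# or, equivalently, over plain HTTP:
curl -X POST -d 'https://example.com/image.png' http://localhost:8080/r/myapp/dotnethash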



Harald Sitter: KDE Frameworks 5 Content Snap Techno

Mon, 05 Dec 2016 16:10:14 +0000

In the previous post on Snapping KDE Applications we looked at the high-level implication and use of the KDE Frameworks 5 content snap to snapcraft snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap, but otherwise they can be regular snaps and have their own applications etc.

KDE Frameworks 5’s snap is special in terms of size and scope: the whole set of KDE Frameworks 5, combined with Qt 5, combined with a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt 5 and KF5 parts, we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof-of-concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.

Looking at this snapcraft.yaml we’ll find some fancy stuff. After the regular snap attributes, the actual content interface is defined. It’s fairly straightforward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

slots:
  kde-frameworks-5-slot:
    content: kde-frameworks-5-all
    interface: content
    read:
      - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I’ll only show the relevant excerpts. The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

kf5:
  plugin: nil
  stage-packages:
    - libkf5coreaddons5
    ...

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage and we can build other parts against them, but we won’t have any of them end up in the final snap. After all, they would be entirely useless there.

kf5-dev:
  after:
    - kf5
  plugin: nil
  stage-packages:
    - libkf5coreaddons-dev
    ...
  snap:
    - "-*"

Besides those two we also build two runtime integration parts entirely from scratch: breeze and plasma-integration. They aren’t actually needed, but they ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now.
There is this kf5-dev part, but it does not end up in the final snap, where it would be entirely useless anyway, as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snap[...]
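
For the curious, the slot side can be inspected with standard snap tooling once the content snap is on a machine; a sketch (assuming the snap is published in the store under the name kde-frameworks-5):

snap install kde-frameworks-5
snap interfaces | grep kde-frameworks-5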



Rafael Carreras: Yakkety Yak release parties

Mon, 05 Dec 2016 13:44:39 +0000


On November 5th the Catalan LoCo Team celebrated a release party for the latest Ubuntu version, in this case 16.10 Yakkety Yak, in Ripoll, a place full of history. As always, we started by explaining what Ubuntu is and how it adapts to new times and devices.


FreeCad 3D design and Games were both present at the party.


A few weeks later, on December 3rd, we held another release party, this time in Barcelona.


We went to Soko, formerly a chocolate factory and nowadays a kind of makers’ lab, very excited about free software. First, Josep explained current developments in Ubuntu, and we carried out some installations on laptops.


We ate some pizza and had discussions about free software in public administrations. Apart from the usual users who came to install Ubuntu on their computers, the people responsible for Soko gave us 10 laptops to install Ubuntu on too. We finished up by installing Wine so that some Lego software could run.


That’s some art that is being made at Soko.

I’m publishing this post because we need some documentation on release parties. If you need advice on how to run a release party, you can contact me or anyone in the Ubuntu community.

 




Svetlana Belkin: Community Service Learning Within Open * Communities

Sun, 04 Dec 2016 21:35:33 +0000

As the name implies, “service-learning is an educational approach that combines learning objectives with community service in order to provide a pragmatic, progressive learning experience while meeting societal needs” (Wikipedia).  When you add the “community” part to that definition, it becomes “about leadership development as well as traditional information and skill acquisition” (Janet 1999).

How does this apply to Open * communities?

Simple!  Community service learning is an ideal way to get middle school, high school and college students involved in the various communities and to understand the power of Open *, and also to stay active after their term of community service learning ends.

This idea came to me just today (as of writing, November 30th) as a thought about what Open * really is: not the straightforward definition of it, but the effect Open * creates.  As I stated on the home page of my site, Open * creates a sense of empowerment.  One way is through actions that build skills and improve them.  Which skills are those?  Mozilla Learning has a map and description of these skills on their Web Literacy pages.  They are also shown below:

Most of these skills, along with the ways to gain them (read, write, participate), can be worked on through community service learning.

As stated above, community service learning really focuses on gaining skills, including leadership skills, while (in the Open * sense) contributing to projects that have an impact on society.  This is really needed now, as there are many local and global issues that Open * can provide solutions to.

I see this as an outreach program for schools and the various organizations/groups such as Ubuntu, System76, Mozilla, and even Linux Padawan.  Unlike Google Summer of Code (GSoC), no one receives a stipend, but the idea of having a mentor could be taken from GSoC.  No, not could but should, because the student needs someone to guide them; hence Linux Padawan could benefit from this idea.

That said, I will try to work out a sample program that could be used, and maybe test it with Linux Padawan.  Maybe I could have this ready by the spring semester.

Random Fact #1: Simon Quigley, through his middle school, is in a way already doing this type of learning.

Random Fact #2: At one point of time, I wanted to translate that Web Literacy map into one that can be applied to Open *, not just one topic.




Jo Shields: A quick introduction to Flatpak

Sun, 04 Dec 2016 10:44:34 +0000

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak-ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime. Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched). There are also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself. Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours, or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is that, as of the upcoming 0.8.0 release of Flatpak, going from a clean install of the flatpak package to having a working MonoDevelop is a single command:

flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref

For the current 0.6.x versions of Flatpak, the user also needs to run

flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo

first; this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop, and export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, or “open containing folder”.
There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs.  I don’t think that will ever quite be ther[...]
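
Once installed, the application can also be launched from a terminal; a one-line sketch using the application ID mentioned above:

flatpak run com.xamarin.MonoDevelop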



Ross Gammon: My Open Source Contributions June – November 2016

Sat, 03 Dec 2016 11:52:02 +0000

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian

  • Uploaded a new version of the debian-multimedia blends metapackages.
  • Uploaded the latest abcmidi.
  • Uploaded the latest node-process-nextick-args.
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.

Ubuntu

  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area[...]



Timo Aaltonen: Mesa 12.0.4 backport available for testing

Fri, 02 Dec 2016 22:28:34 +0000

Hi!

I’ve uploaded Mesa 12.0.4 for xenial and yakkety to my testing PPA for you to try out. 16.04 shipped with 11.2.0, so it’s a slightly bigger update there, while yakkety is already on 12.0.3. The new version should give radeon users a 15% performance boost in certain games with complex shaders.

Please give it a spin and report to the (yakkety) SRU bug if it works or not, and mention the GPU you tested with. At least Intel Skylake seems to still work fine here.
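
If you haven’t used a PPA before, the usual dance looks like the sketch below; the owner/name placeholder stands in for the PPA linked above, and glxinfo comes from the mesa-utils package:

sudo add-apt-repository ppa:<owner>/<ppa-name>
sudo apt update
sudo apt full-upgrade
glxinfo | grep "OpenGL version"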

 





Costales: Fairphone 2 & Ubuntu

Fri, 02 Dec 2016 17:54:54 +0000

At UbuCon Europe I got a first-hand look at the progress of Ubuntu Touch on the Fairphone 2.

Ubuntu Touch & Fairphone 2

The Fairphone 2 is a unique phone. As its name suggests, it is a phone that is ethical with the world: it uses no child labour, it is built with conflict-free minerals, and it even cares about the waste it generates.

Front and back

On the software side it runs several operating systems, and at last, Ubuntu is one of them.

Your choice

The Ubuntu port is implemented by the UBPorts project, which is advancing by leaps and bounds every week.

When I tried the phone, I was surprised by the speed of Unity, similar to that of my BQ E4.5. The camera is good enough, and the battery life is acceptable. I especially loved the quality of the screen; you notice its sharpness at a glance. As for applications, I tried several from the Store without any problem.

Case

In summary: a great operating system for a great phone :) A win:win.

If you are interested in collaborating as a developer on this port, I recommend this Telegram group: https://telegram.me/joinchat/AI_ukwlaB6KCsteHcXD0jw

All images are CC BY-SA 2.0. [...]



Harald Sitter: Snapping KDE Applications

Fri, 02 Dec 2016 14:44:29 +0000

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead.

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are however a bit on the spot. As our software builds on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite abysmal. When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap to be in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).

Since then a lot of effort was put into devising a system that would allow us to deal with this more efficiently. We now have a reasonably suitable solution on the table: the KDE Frameworks 5 content snap. A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.

The best thing is that besides some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is like a regular snapcraft file. Let’s look at how this works by example of KAlgebra, a calculator and mathematical function plotter.

Any snapcraft.yaml has some global attributes we’ll want to set for the snap:

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We’ll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content-shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

To access the KDE Frameworks 5 content share we’ll then want to define a plug our application can use to access the content. This is always the same for all applications.

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

Once we got all that out of the way we can move on to actually defining the parts that make up our snap. For the most part, parts are build instructions for the application and its dependencies. With content shares there are two boilerplate parts you want to define. The development tarball is essentially a fully built KDE Frameworks tree including development headers and cmake configs.
The tarball is packed by the same tech that builds the actual content share, so this allows you to build against the correct versions of the latest share.

kde-frameworks-5-dev:
  plugin: dump
  snap: [-*]
  source: http://build.neon.kde.org/jo[...]
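
When testing such a snap locally (e.g. one installed with --dangerous), the content connection may need to be made by hand; a sketch using the plug and slot names from these posts:

snap connect kalgebra:kde-frameworks-5-plug kde-frameworks-5:kde-frameworks-5-slot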



Raphaël Hertzog: My Free Software Activities in November 2016

Fri, 02 Dec 2016 11:45:13 +0000

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8, fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting a new CVE every month.

Then I spent quite some time reviewing all the entries in dla-needed.txt. I wanted to get rid of some misleading or no-longer-applicable comments and at the same time help Olaf, who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough), such as those affecting dwarfutils, dokuwiki and irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately. Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1, fixing CVE-2016-9427. The patch was large and had to be manually backported as it did not apply cleanly. The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp, which was going to be removed from testing due to #841581. After looking at that bug, it turned out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it. I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package that had been rejected from NEW. I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2, but I still adopted the package in the pkg-security team as it is a security-relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications, and the discussion sparked quite some interest both in Debian mailing lists and up to LWN. On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem has not yet managed to put out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332, which was a request to remove live-build from Debian.
While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit [...]



Ubuntu Podcast from the UK LoCo: S09E40 – Dirty Dan’s Dark Delight - Ubuntu Podcast

Thu, 01 Dec 2016 15:00:14 +0000

It’s Season Nine Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Dan Kermac are connected and speaking to your brain. The same line-up as last week are here again for another episode.

In this week’s show:

  • We discuss what we’ve been up to recently: telling people to use Opera, joining ORG and playing Tyranny.

  • We review the nexdock and how it works with the Raspberry Pi 3, Meizu Pro 5 Ubuntu Phone, bq M10 FHD Ubuntu Tablet, Android, Dragonboard 410c, Roku, Chromecast, Amazon FireTV and laptops from Dell and Entroware.

  • We share a Command Line Lurve: netdiscover – a network address discovering tool.

sudo apt install netdiscover
sudo netdiscover

The output looks something like this:

 _____________________________________________________________________________
   IP            At MAC Address      Count  Len  MAC Vendor / Hostname
 -----------------------------------------------------------------------------
 192.168.2.2     fe:ed:de:ad:be:ef   1      42   Unknown vendor
 192.168.2.1     da:d5:ba:be:fe:ed   1      60   TP-LINK TECHNOLOGIES CO.,LTD.
 192.168.2.11    ba:da:55:c0:ff:ee   1      60   BROTHER INDUSTRIES, LTD.
 192.168.2.30    02:02:de:ad:be:ef   1      60   Elitegroup Computer Systems Co., Ltd.
 192.168.2.31    de:fa:ce:dc:af:e5   1      60   GIGA-BYTE TECHNOLOGY CO.,LTD.
 192.168.2.107   da:be:ef:15:de:af   1      42   16)
 192.168.2.109   b1:gb:00:bd:ba:be   1      60   Denon, Ltd.
 192.168.2.127   da:be:ef:15:de:ad   1      60   ASUSTek COMPUTER INC.
 192.168.2.128   ba:df:ee:d5:4f:cc   1      60   ASUSTek COMPUTER INC.
 192.168.2.101   ba:be:4d:ec:ad:e5   1      42   Roku, Inc
 192.168.2.106   ba:da:55:0f:f1:ce   1      42   LG Electronics
 192.168.2.247   f3:3d:de:ad:be:ef   1      60   Roku, Inc
 192.168.3.2     ba:da:55:c0:ff:33   1      60   Raspberry Pi Foundation
 192.168.3.1     da:d5:ba:be:f3:3d   1      60   TP-LINK TECHNOLOGIES CO.,LTD.
 192.168.2.103   da:be:ef:15:d3:ad   1      60   Unknown vendor
 192.168.2.104   b1:gb:00:bd:ba:b3   1      42   Unknown vendor

  • And we go over all your amazing feedback. Thanks for sending it; please keep sending it!

  • This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, Tweet us, comment on our Facebook page, comment on our Google+ page, or comment on our sub-Reddit. Join us in the Ubuntu Podcast Chatter group on Telegram [...]
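
By default netdiscover listens passively and auto-detects ranges; it can also actively sweep an explicit range. A quick sketch (the subnet below is just an example, adjust to your network):

sudo netdiscover -r 192.168.2.0/24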


Media Files:
http://static.ubuntupodcast.org/ubuntupodcast/s09/e40/ubuntupodcast_s09e40.mp3




Daniel Pocock: Using a fully free OS for devices in the home

Thu, 01 Dec 2016 13:11:03 +0000

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes, and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance, and in these cases the decision about what to buy can be limited to those devices that are optimal for replacing the OS.

  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next-day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?

  • Is a completely silent/fanless solution necessary?

  • Is it possible to completely avoid embedded microcode and firmware?

  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions have already appeared; it would be great to see any other ideas that people have about these choices. [...]






Zygmunt Krynicki: Ubuntu Core Gadget Snaps

Thu, 01 Dec 2016 08:29:05 +0000

Gadget snaps: the somewhat mysterious part of snappy that few people grok. Being a distinct snap type, next to the kernel, os and the most common app types, it gets some special roles. If you are on a classic system like Ubuntu, Debian or Fedora, you don't really need or have one yet. Looking at all-snap core devices, you will always see one. In fact, each snappy reference platform has one. But where are they?

Up until now the gadget snaps were a bit hard to find. They were out there, but you had to have a good amount of luck and twist your tongue at the right angle to find them. That's all changed now. If you look at https://github.com/snapcore you will see a nice, familiar pattern of devicename-gadget repositories. Each repository is dedicated to one device, so you will see a gadget snap for the Raspberry Pi 2 or Pi 3, for example.

But there's more! Each of those github repositories is linked to a launchpad project that automatically mirrors the git repository, builds the snap, uploads it to the store and publishes the snap to the edge channel!
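
Since the builds land in the edge channel, you can pull one down for inspection without any special tooling; a sketch (the pi3-gadget snap name is an assumption based on the devicename-gadget repository pattern):

snap download --channel=edge pi3-gadget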

The work isn't over: as you will see, the gadget snaps are mostly in binary form, hand-made to work but still a bit too mysterious. The Canonical Foundations team is working on building them in a way that is friendlier to the community and easier to trace back to their source code origins.

If you'd like to learn more about this topic then have a look at the snapd wiki page for gadget snaps.



Eric Hammond: Amazon Polly Text To Speech With aws-cli and Twilio

Wed, 30 Nov 2016 18:30:00 +0000

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices. Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech, including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family. This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech. The basic approach is:

  1. Generate the speech audio using Amazon Polly.
  2. Upload the resulting audio file to S3.
  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..."        # Your Twilio allocated phone number
to_phone="+1..."          # Your phone number to call
TWILIO_ACCOUNT_SID="..."  # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."   # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=$to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated. Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon P[...]
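
One way to set up the automatic deletion mentioned above is an S3 lifecycle rule; a sketch with aws-cli (the rule ID is illustrative, the prefix matches the key prefix used above):

aws s3api put-bucket-lifecycle-configuration \
  --bucket $s3bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-twilio-audio",
      "Filter": {"Prefix": "audio-for-twilio/"},
      "Status": "Enabled",
      "Expiration": {"Days": 1}
    }]
  }'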



Elizabeth K. Joseph: Ohio LinuxFest 2016

Wed, 30 Nov 2016 18:29:44 +0000

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted, I was thrilled to finally be able to attend. My employer at the time also pitched in as a Bronze sponsor and sent along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling, so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do, and knowing that they aren’t just doing it to make your life difficult or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained that they try not to over-plan like many government organizations do, which can lead to failure; they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open-by-default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of the organizations they work with, so that the tools they develop together will actually be used, as well as respecting the domain knowledge of the teams they’re working with. Slides from her talk are here, and they include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote

Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based de[...]



Valorie Zimmerman: KDE Developer Guide needs a new home and some fresh content

Tue, 29 Nov 2016 06:31:10 +0000

As I just posted in the Mission Forum, our KDE Developer Guide needs a new home. Currently it is "not found" where it is supposed to be.

UPDATE: Nicolas found the PDF on archive.org, which does have the photos too. Not as good as the xml, but better than nothing.

We had great luck using markdown files in git for the chapters of the Frameworks Cookbook, so the Devel Guide should be stored and developed in a like manner. I've been reading about Sphinx lately as a way to write documentation, which is another possibility. Kubuntu uses Sphinx for docs.

In any case, I do not have the time or skills to get, restructure and re-place this handy guide for our GSoC students and other new KDE contributors.

This is perhaps suitable for a Google Code-in task, but I would need a mentor who knows markdown or Sphinx to oversee. Contact me if interested! #kde-books or #kde-soc



Jono Bacon: Luma Giveaway Winner – Garrett Nay

Tue, 29 Nov 2016 00:08:22 +0000

A little while back I kicked off a competition to give away a Luma WiFi set. The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

"I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group: Story Games from Candace Fields on Vimeo). Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group. In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set."

What struck me about this example was that it gets to the heart of what community should be and often is: providing a welcoming, supportive environment for people with like-minded ideas and interests. While much of my work focuses on the complexities of building collaborative communities and the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities, where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

[...]



The Fridge: Ubuntu Weekly Newsletter Issue 489

Mon, 28 Nov 2016 22:37:15 +0000

Welcome to the Ubuntu Weekly Newsletter. This is issue #489 for the week November 21 – 27, 2016, and the full version is available here.

In this issue we cover:

- Welcome New Members and Developers
- Ubuntu Stats
- Ubucon Europe 2016 – Days 2 & 3
- UbuCon Europe in the retrospective
- LoCo Events
- Forums Council: New SuperModerators Appointed
- Aurelien Gateau: Gwenview Importer is back
- New snapd 2.18 release and new candidate core snap
- Canonical News
- System76 Oryx Pro review: Linux in a laptop has never been better
- In The Blogosphere
- Other Articles of Interest
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 12.04, 14.04, 16.04 and 16.10
- And much more!

This issue of The Ubuntu Weekly Newsletter is brought to you by:

- Paul White
- Chris Guiver
- Elizabeth K. Joseph
- David Morfin
- And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License. [...]



Seif Lotfy: Rustifying IronFunctions

Mon, 28 Nov 2016 21:47:56 +0000

(image)

As mentioned in my previous blog post, there is a new open-source, Lambda-compatible, on-premises, language-agnostic, serverless compute service called IronFunctions.
(image)

While IronFunctions itself is written in Go, Rust is a much-admired language, so it was decided to add support for it in the fn tool.

So now you can use the fn tool to create and publish functions written in Rust.

Using Rust with functions

The easiest way to create an IronFunctions function in Rust is via cargo and fn.

Prerequisites

First, create an empty Rust project as follows:

$ cargo init --name func --bin

Make sure the project is named func and is of type bin. Now just edit your code; a good starting point is the following "Hello" example:

use std::io;
use std::io::Read;

fn main() {
    // IronFunctions passes the request payload to the function on stdin,
    // so read all of it into a buffer.
    let mut buffer = String::new();
    let stdin = io::stdin();
    if stdin.lock().read_to_string(&mut buffer).is_ok() {
        // Greet whatever was piped in.
        println!("Hello {}", buffer.trim());
    }
}

You can find this example code in the repo.

Once done, you can create an IronFunctions function.

Creating a function

$ fn init --runtime=rust <username>/<funcname>

In my case it's fn init --runtime=rust seiflotfy/rustyfunc, which will create the func.yaml file required by functions.

Building the function

$ fn build

This will create a Docker image <username>/<funcname> (again, in my case seiflotfy/rustyfunc).

Testing

You can run this locally without pushing it to functions yet by running:

$ echo Jon Snow | fn run
Hello Jon Snow  

Publishing

In the directory of your rust code do the following:

$ fn publish -v -f -d ./

This will publish your code to your functions service.

Running it

Now to call it on the functions service:

$ echo Jon Snow | fn call seiflotfy rustyfunc 

which is the equivalent of:

$ curl -X POST -d 'Jon Snow' http://localhost:8080/r/seiflotfy/rustyfunc

Next

In the next post I will write a more computationally intensive Rust function to test and benchmark IronFunctions, so stay tuned :D




Ubuntu Podcast from the UK LoCo: S09E39.2 – Le CrossOver Number 2 - Ubuntu Podcast

Mon, 28 Nov 2016 15:00:15 +0000

It’s Le CrossOver #2! Marius Quabeck, Rudy, Martin Wimpress and Max Kristen are connected and speaking to your brain.

Four complete strangers make a podcast during UbuCon Europe 2016 at the Unperfekthaus in Essen, Germany.

That’s all for Le CrossOver #2! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.


Media Files:
http://static.ubuntupodcast.org/ubuntupodcast/s09/e39.2/ubuntupodcast_s09e39.2.mp3




Sujeevan Vijayakumaran: UbuCon Europe in the retrospective

Sun, 27 Nov 2016 11:30:00 +0000

Last weekend the very first UbuCon Europe took place in Essen, Germany. It was the second UbuCon where I was the head of the organisation team, but the first international one, which brought a few more challenges compared to a national UbuCon. ;) This blog post focuses on both the event itself and some information about the organisation.

Thursday

The first unofficial day of the UbuCon was Thursday, when some people had already arrived from different countries. We were already ten people from five different countries, and we visited the Christmas market in Essen, which opened on that day. Gladly we had Nathan Haines with us, so he could translate all the alcoholic drinks from German to English, because I don't know anything about that. ;)

Friday

The first official day started in the afternoon with a guided tour through Zeche Zollverein. We were 18 people, this time from eight different countries. The tour showed us the history of the local area and its coal mines, which were active in the past; they showed us the whole production line from coal mining to processing. The tour took two hours, and after that we went to the Unperfekthaus, where the first social event of the weekend took place. There we were roughly fifty people, mostly drinking, eating and talking. It was also the first chance to see familiar and new faces again!

Saturday

Saturday started with my quick introduction to the event. After that, Canonical CEO Jane Silber held the first keynote, where she talked mostly about IoT and the cloud. I was glad that she accepted my invitation, even though she had to leave after lunch. The day was packed with different talks and workshops. I sadly couldn't join every talk, but the talk from Microsoft about "Bash on Ubuntu on Windows" was quite interesting. Laura Czajkowski's talk about "Supporting Inclusion & Involvement in a Remote Distributed Team" was short but also interesting. The day ended with the raffle and the UbuCon Quiz. Everyone could buy an unlimited number of raffle tickets for 1€ each, so there were a few people with more than ten tickets. We mostly had different Ubuntu USB sticks, three Ubuntu books, Microsoft T-shirts, a Nextcloud Box and the bq Aquaris M10 tablet, which were pretty popular. Funnily, some people won more than one prize. The UbuCon Quiz afterwards was fun too; the ultimate answer to every question seemed to be "Midnight Commander" :). After the quiz the second social event started, joined by about 80 people.

Sunday

After the long Saturday, the day started again at around 10 o'clock in the morning. There were different talks and workshops again. Daniel Holbach did a workshop on how to create snaps, Costales gave a talk about his navigation app uNav, and later Alan Pope talked about how to bring an app as a snap to the store. Elizabeth K. Joseph talked about how to build a career with Ubuntu and FOSS, and Olivier Paroz talked about Nextcloud and its upcoming features. The day, and the conference, ended at 5pm. By that time many people were already on their way back home.

Conclusion

We welcomed 130 people from 17 different countries and three continents. Originally I didn't expect that many people from other countries. In the end 55% of the attendees were from Germany. In the last year we had a similar number of people who attended the German UbuCon[...]



Colin King: stress-ng 0.07.07 released

Sat, 26 Nov 2016 10:35:07 +0000

stress-ng is a tool that I have been developing on and off for a few years. It is designed to stress kernels to force out bugs, to stress CPU and memory, and it also contains some performance benchmarking metrics. stress-ng is now entering the maturity part of the development phase; however, there is always scope to add new stressors and generally improve the tool.

I've just released version 0.07.07 for the Ubuntu Zesty 17.04 release and it contains a few additional features:

- SIGUSR2 sent to stress-ng will dump out the current system load and memory statistics
- Sched policy stress tests for different scheduler configurations
- A previously missing --sockfd-port option

And various bug fixes:

- Fixed up some minor memory leaks
- Added missing counter stats on the bind-mount, fp-error, personality and resources stressors
- Fixed the --fiemap-bytes option
- Fixed up build warnings with various compilers and static analyzers

The major change to stress-ng over the past month was an internal re-working of system call and GNU features to abstract these into a shim layer, reducing the number of build-conditional #ifdef paths around the code. This simplifies portability, so the code now builds more easily across a range of systems and with various versions of gcc and clang, and it fixes some issues on older kernels too. This also makes the code faster to statically analyze with cppcheck.

For more details, visit the stress-ng project page or the quick help guide. [...]
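The shim idea is easy to picture outside C as well. Below is a hypothetical sketch of the same pattern in Rust, assuming a made-up drop_caches_hint() wrapper; stress-ng's real shim is implemented in C, so this only illustrates the principle of keeping platform conditionals in one place:

#[cfg(target_os = "linux")]
fn drop_caches_hint() -> &'static str {
    // On Linux, a real shim would call the native interface here.
    "native implementation available"
}

#[cfg(not(target_os = "linux"))]
fn drop_caches_hint() -> &'static str {
    // Graceful fallback on platforms lacking the feature.
    "falling back to a portable no-op"
}

fn main() {
    // Call sites stay free of #ifdef-style conditionals.
    println!("shim says: {}", drop_caches_hint());
}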



Julian Andres Klode: Starting the faster, more secure APT 1.4 series

Fri, 25 Nov 2016 23:43:32 +0000

We just released the first beta of APT 1.4 to Debian unstable (beta here means that we don't know of any other big things to add to it, but are still open to further extensions). This is the release series that will be released with Debian stretch, Ubuntu zesty, and possibly Ubuntu zesty+1 (if the Debian freeze takes a very long time, even zesty+2 is possible). It should reach the master archive in a few hours, and your mirrors shortly after that.

Security changes

APT 1.4 by default disables support for repositories signed with SHA1 keys. I announced back in January that it was my intention to do this during the summer for development releases, but I only remembered the January 1st deadline for the stable releases supporting that (APT 1.2 and 1.3), so better late than never. Around January 1st, the same or a similar change will occur in the APT 1.2 and 1.3 series in Ubuntu 16.04 and 16.10 (subject to approval by Ubuntu's release team). This should mean that repository providers have had about one year to fix their repositories, and more than 8 months since the release of 16.04. I believe that 8 months is a reasonable time frame to upgrade a repository signing key, and hope that providers who have not updated their repositories yet will do so as soon as possible.

Performance work

APT 1.4 provides a 10-20% performance increase in cache generation (and according to callgrind, we went from approx. 6.8 billion to 5.3 billion instructions for my laptop's configuration, a reduction of more than 21%). The major improvements are:

We switched the parsing of Deb822 files (such as Packages files) to my perfect hash function TrieHash. TrieHash, which generates C code from a set of words, is about equal to or twice as fast as the previously used hash function (and two to three times faster than gperf), and we save an additional 50% of that time as we only have to hash once during parsing now, instead of during lookup as well. APT 1.4 marks the first time TrieHash is used in any software. I hope that it will spread to dpkg and other software at a later point in time.

Another important change was to drop normalization of Description-MD5 values, the fields mapping a description in a Packages file to a translated description. We used to parse the hex digits into a native binary stream, and then converted it back to hex digits for comparisons, which cost us about 5% of the run-time performance.

We also optimized one of our hash functions, the VersionHash that hashes the important fields of a package to recognize packages with the same version but different content, so that it no longer normalizes data into a temporary buffer. This buffer has been the subject of some bugs (overflow, incompleteness) in the recent past, and also caused some slowdown due to the additional writes to the stack. Instead, we now pass the bytes we are interested in directly to our CRC code, one byte at a time.

There were also some other micro-optimisations: for example, the hash tables in the cache used to be ordered by standard compare (alphabetical, followed by shortest). They are now ordered by size first, meaning we can avoid data comparisons for strings of different lengths. We also got rid of a std::string that cannot use short string optimisation in a hot path of the code. Finally, we also converted our case[...]
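TrieHash itself emits C, but the two tricks above (dispatch on length first, then recognize a word by direct byte comparison rather than hash-then-compare) are easy to sketch. The following is a hypothetical illustration in Rust, not TrieHash's generated code; the Field enum and lookup function are invented for the example, while Package, Version and Description are real Deb822 field names:

#[derive(Debug, PartialEq)]
enum Field {
    Package,
    Version,
    Description,
    Unknown,
}

fn lookup(name: &[u8]) -> Field {
    // Dispatch on length first: strings of different lengths are
    // never compared byte by byte.
    match name.len() {
        7 => {
            if name.eq_ignore_ascii_case(b"Package") {
                Field::Package
            } else if name.eq_ignore_ascii_case(b"Version") {
                Field::Version
            } else {
                Field::Unknown
            }
        }
        11 if name.eq_ignore_ascii_case(b"Description") => Field::Description,
        _ => Field::Unknown,
    }
}

fn main() {
    assert_eq!(lookup(b"Package"), Field::Package);
    assert_eq!(lookup(b"version"), Field::Version);
    assert_eq!(lookup(b"Priority"), Field::Unknown);
    println!("recognized fields without hashing");
}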



Dougie Richardson: Install Android Studio on Ubuntu

Fri, 25 Nov 2016 22:16:50 +0000

Android Studio is a great development environment and is available on Ubuntu. I'm using Ubuntu MATE 16.10 "Yakkety Yak".

First install a Java Development Kit (JDK). OpenJDK is pre-installed, or you can use Oracle Java 8 (there is a great guide here). I don't wish to argue over your choice; I need to use the latter (my tutor does). Download Android Studio here. I extracted it to /opt, ran the installer, and used my home folder for the SDK.

If you are using 64-bit, you need the 32-bit GNU standard C++ library:

sudo apt install lib32stdc++6

Virtualisation support is interesting. I read two tutorials and Google's guide. The former makes reference to command line options not in version 2.2.2. These posts suggest this is a bug, but it may now be default behaviour. First enable virtualisation in the BIOS (check whether it is enabled using "kvm-ok"), then:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
sudo adduser dougie kvm
sudo adduser dougie libvirtd

This results in an error. Using the system version of libstdc++.so.6 works: add the following to /etc/environment:

ANDROID_EMULATOR_USE_SYSTEM_LIBS=1

It seems snappy, but with no feedback I'm unsure whether it is accelerated. So I now have a development environment set up for my project. The next hurdle is to choose a title. So far it is: a development project; a distributed application; and it uses Android.[...]



Stephan Adig: How to create LXD Containers with Ansible 2.2

Fri, 25 Nov 2016 13:50:00 +0000

(A working example from this post can be found on my GitHub page.)

I have been working with Ansible for a couple of years now, with LXD as my local test environment, and I was waiting for a simple solution to create LXD containers (locally and remotely) with Ansible from scratch, without using any helper methods like shell: lxd etc. Well, since Ansible 2.2 we have native LXD support. Furthermore, the Ansible team actually showed some respect to the Python 3 community and has implemented Python 3 support.

Preparations

First of all, you need to have the latest Ansible release, or install it in a Python 3 virtual environment via pip install ansible.

Create your Ansible directory layout

To make your life a little bit easier later, create your Ansible directory structure and turn it into a Git repository:

user@home: ~> mkdir -p ~/Projects/git.ansible/lxd-containers
user@home: ~> cd ~/Projects/git.ansible/lxd-containers
user@home: ~/Projects/git.ansible/lxd-containers> mkdir -p {inventory,roles,playbooks}

Create your inventory file

Imagine you want to create 5 new LXD containers. You could write 5 playbooks to do it, or you can be smart and let Ansible do it for you. Working with inventory files is easy: an inventory is simply a file with an INI-like structure. Let's create an inventory file for the new LXD containers in ~/Projects/git.ansible/lxd-containers/inventory/containers:

[local]
localhost

[containers]
blog-01 ansible_connection=lxd
blog-02 ansible_connection=lxd
blog-03 ansible_connection=lxd
blog-04 ansible_connection=lxd
blog-05 ansible_connection=lxd

We have now defined 5 containers.

Create a playbook for running Ansible

We now need an Ansible playbook. A playbook is just a simple YAML file. You can edit this file with your editor of choice; I personally like Sublime Text 3 or GitHub's Atom, but any other editor (like Vim or Emacs) will do. Create a new file under ~/Projects/git.ansible/lxd-containers/playbooks/lxd_create_containers.yml:

- hosts: localhost
  connection: local
  roles:
    - create_lxd_containers

Let's go through this briefly:

- hosts: defines the hosts to run Ansible on. Used like this, it means this playbook runs on your local machine.
- connection: local: Ansible will use a local connection, rather than sshing into your local box.
- roles: a list of Ansible roles to be used during this playbook. You could also write all the Ansible tasks directly in this playbook, but as you will want to reuse several tasks for certain workloads, it's a better idea to divide them into roles.

Create the Ansible role

Ansible roles are used to separate repeating tasks from the playbooks. Think about this example: you have a playbook for all your webservers like this:

- hosts: webservers
  tasks:
    - name: apt update
      apt: update_cache=yes

and you have a playbook for all your database servers like this:

- hosts: databases
  tasks:
    - name: apt update
      apt: update_cache=yes

What do you see? Yes, the same task twice, namely "apt update". To make our lives easier, instead of writing a task to update the system's package archive cache into every playbook, we create an Ansible role. Ansible roles have a special directory structure; I advise reading the good documentation ove[...]



Sebastian Dröge: Writing GStreamer Elements in Rust (Part 3): Parsing data from untrusted sources like it’s 2016

Thu, 24 Nov 2016 23:10:52 +0000

And again it took quite a while to write a new update about my experiments with writing GStreamer elements in Rust. The previous articles can be found here and here. Since last time, there was also the GStreamer Conference 2016 in Berlin, where I gave a short presentation about this. Progress was rather slow unfortunately, due to work and other things getting in the way. Let's hope this improves.

Anyway! There will be three parts again, and especially for the last one I could use some suggestions from more experienced Rust developers about how to solve state handling / state machines in a nicer way. The first part will be about parsing data in general, especially from untrusted sources. The second part will be about my experimental and current proof-of-concept FLV demuxer.

Parsing Data

Safety?

First of all, you probably all saw a couple of CVEs about security-relevant bugs in (rather uncommon) GStreamer elements going around. While all of them would have been prevented by having the code written in Rust (due to by-default array bounds checking), that's not going to be our topic here. They also would have been prevented by using the various GStreamer helper APIs, like GstByteReader, GstByteWriter and GstBitReader. So just use those, really. Especially in new code (which is exactly the problem with the code affected by the CVEs: it was old and forgotten). Don't do an accountant's job, counting how much money/how many bytes you have left to read.

But yes, this is something where Rust will also provide an advantage by having by-default safety features. It's not going to solve all our problems, but at least some classes of problems. And sure, you can write safe C code if you're careful, but I'm sure you also drive with a seatbelt although you can drive safely. To quote Federico about his motivation for rewriting (parts of) librsvg in Rust:

Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it's all due to using C. We've gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That's the kind of 1970s bullshit that Rust prevents.

You can directly replace the word librsvg with GStreamer here.

Ergonomics

The other aspect of parsing data is that it's usually a very boring part of programming. It should be as painless as possible, as easy as possible to do in a safe way, and after having written your 100th parser by hand you probably don't want to do that again. Parser combinator libraries like Parsec in Haskell provide a nice alternative: you essentially write down something very close to a formal grammar of the format you want to parse, and out of this comes a parser for the format. Unlike parser generators like good old yacc, everything is written in the target language though, and there is no separate code generation step.

Rust, being quite a bit more expressive than C, also made people write parser combinator libraries. They are all not as ergonomic (yet?) as in Haskell, but still a big improvement over anything else. There's nom, combine and chomp. All havi[...]
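To make the combinator idea concrete, here is a minimal, self-contained sketch in plain Rust. It deliberately uses none of nom's, combine's or chomp's actual APIs and only shows the underlying shape: a parser is a function from input to a (rest, value) pair or an error, and small parsers compose into larger ones. The FLV header bytes in main follow the standard layout (signature, version, flags, 32-bit header size):

// A parser returns the unconsumed rest of the input plus a value, or an
// error (a unit here; real libraries carry richer error types).
type PResult<'a, T> = Result<(&'a [u8], T), ()>;

// Match a fixed byte sequence, e.g. the "FLV" signature.
fn tag<'a>(input: &'a [u8], t: &[u8]) -> PResult<'a, ()> {
    if input.len() < t.len() || &input[..t.len()] != t {
        return Err(());
    }
    Ok((&input[t.len()..], ()))
}

// Parse one big-endian u32, with an explicit bounds check.
fn be_u32<'a>(input: &'a [u8]) -> PResult<'a, u32> {
    if input.len() < 4 {
        return Err(());
    }
    let v = ((input[0] as u32) << 24)
        | ((input[1] as u32) << 16)
        | ((input[2] as u32) << 8)
        | (input[3] as u32);
    Ok((&input[4..], v))
}

fn main() {
    // "FLV", version 1, flags 0x05, header size 9.
    let data = b"FLV\x01\x05\x00\x00\x00\x09";
    let (rest, _) = tag(data, b"FLV").expect("not FLV data");
    // Skip the version and flags bytes, then read the header size.
    let (rest, header_size) = be_u32(&rest[2..]).expect("truncated header");
    println!("FLV header size: {}, {} bytes left", header_size, rest.len());
}

Malformed input fails with an Err instead of reading past the end of the buffer, which is exactly the class of bug behind the CVEs mentioned above.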


Media Files:
http://streams.videolan.org/samples/FLV/asian-commercials-are-weird.flv




Aurélien Gâteau: Gwenview Importer is back

Thu, 24 Nov 2016 06:50:27 +0000

I spent some time over the last weeks to port Gwenview Importer to KDE Frameworks 5, as I was getting frustrated with importing pictures by hand. It's a straight port: no new features.

Here is a screenshot after I filled my SD Card with random pictures of my daughter and cat for the purpose of illustrating this blog post :)

(image)

I missed the KDE Applications 16.12 deadline, but the code is in Gwenview master now, so Gwenview Importer should be in the next KDE Applications release.




Jono Bacon: Microsoft and Open Source: A New Era?

Wed, 23 Nov 2016 16:00:27 +0000

Last week the Linux Foundation announced Microsoft becoming a Platinum member. In the eyes of some, hell finally froze over. For many though, myself included, this was not an entirely surprising move. Microsoft are becoming an increasingly active member of the open source community, and they deserve credit for this continual stream of improvements.

When I first discovered open source in 1998, the big M were painted as a bit of a villain. This accusation was largely fair. The company went to great lengths to discredit open source, including comparing Linux to a cancer, patent litigation, and campaigns formed of misinformation and FUD. This rightly left a rather sour taste in the mouths of open source supporters. The remnants of that sour taste are still strong in some. These folks will likely never trust the Redmond mammoth, their decisions, or their intent. While I am not condoning these prior actions from the company, I would argue that the steady stream of forward progress means, and I know this will be a tough pill to swallow for some of you, that it is time to forgive and forget.

Forward Progress

This forward progress is impressive. They released their version of FreeBSD for Azure. They partnered with Canonical to bring the Ubuntu user-space to Windows (as well as supporting Debian on Azure and even building their own Linux distribution, the Azure Cloud Switch). They supported an open source version of .NET, known as Mono, later buying Xamarin, who led this development, and open sourced those components. They brought .NET Core to Linux, started their own Linux certification, released a litany of projects (including Visual Studio Code) as open source, founded the Microsoft Open Technologies group, and then later merged the group into the wider organization as openness became a core part of the company.

Satya Nadella, seemingly doing a puppet show, without the puppet.

My personal experience with them has reflected this trend. I first got to know the company back in 2001 when I spoke at a DeveloperDeveloperDeveloper day in the UK. Over the years I flew out to Redmond to provide input on initiatives such as .NET, got to know the Microsoft Open Technologies group, and most recently signed the company as a client, where I am helping them to build the next generation of their MVP and RD community. Microsoft are not begrudgingly supporting open source, they are actively pursuing it.

As such, this recent announcement from the Linux Foundation wasn't a huge surprise to me, but it was an impressive formal articulation of Microsoft's commitment to open source. Leaders at Microsoft and the Linux Foundation should both be credited with this additional important step in the right direction, not just for Microsoft, but for the wider acceptance and growth of open source and collaboration.

Work In Progress

Now, some of the critics will be reading this and will cite many examples of Microsoft still acting as the big bad wolf. You are perfectly right to do so. So, let me zone in on this. I am not suggesting they are perfect. They aren't. Companies are merely vessels of people, some of which will still continue to have antiquated perspe[...]



Forums Council: New SuperModerators Appointed

Wed, 23 Nov 2016 03:39:26 +0000

We the Forum Council are happy to announce that Wild Man and DuckHook have been added as Supermods. Both have shown a willingness to help and have done an exemplary job as moderators, and are now ready to take on a more administrative role on the Forums.


(image) (image)



Costales: Ubucon Europe 2016 - Day 3

Mon, 21 Nov 2016 20:27:04 +0000

It fell to me to open this last day of the Ubucon with a talk about uNav. After Nathan's spontaneous "You have arrived at your presentation", I told the story and some tricks of this application so loved by the community, and showed, as an exclusive, the original video of the promo spot.

uNav's talk

A full house :))

I could barely make it to Daniel's talk about snaps, which overlapped with mine. I spent the rest of the day sharing time with different people, more than in the talks. Rudy interviewed me for one of his great podcasts. Of the talks I attended, I would highlight those by Rudy and Alan, both very chatty and entertaining.

Alan's talk

Ubuntu FR are doing an incredible work!!

The day closed with the announcement that Paris will be the next to organise the 2nd Ubucon Europe, although without a date yet.

After the events, everyone went off their own way. I missed better organisation in that regard. So the Portuguese and the Spaniards spent part of the night together again. The idea of a Ubucon Iberia even came up :))

Joan, Gonzalo, Juanfra and I closed the night with a few games of table football in a pub. I will refrain from saying who won and who lost :P

After the 3 days of Ubucon, what stays with me, without a doubt, is each of the people I met and the great moments we shared.

With Robin from UbuntuFun

With Sergi and Joan

Until next time, Essen[...]



Eric Hammond: Watching AWS CloudFormation Stack Status

Mon, 21 Nov 2016 09:00:00 +0000

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources like this? (press play)

That's what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack. It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.

Background

AWS provides a few ways to look at the status of resources in a CloudFormation stack, including the stream of stack events in the Web console and in the aws-cli. Unfortunately, these displays show multiple events for each resource (e.g., CREATE_IN_PROGRESS, CREATE_COMPLETE) and it's difficult to match up all of the resource events by hand to figure out which resources are incomplete and still in progress.

Solution

I created a bit of wrapper code that goes around the aws cloudformation describe-stack-events command. It performs these operations:

- Cuts the output down to the few fields that matter: status, resource name, type, event time.
- Removes all but the most recent status event for each stack resource.
- Sorts the output to put the resources with the most recent status changes at the top.
- Repeatedly runs this command so that you can see the stack progress live and know exactly which resource is taking the longest.

I tossed the simple script up here in case you'd like to try it out: GitHub: aws-cloudformation-stack-status

You can run it to monitor your CloudFormation stack with this command:

aws-cloudformation-stack-status --watch --region $region --stack-name $stack

Interrupt with Ctrl-C to exit.

Note: You will probably need to start your terminal out wider than 80 columns for a clean presentation. Note: This does use the aws-cli, so installing and configuring that is a prerequisite.

Stack Delete Example

Here's another example terminal session watching a stack-delete operation, including some skipped deletions (because of a retention policy). It finally ends with a "stack not found" error, which is exactly what we hope for after a stack has been deleted successfully. Again, the resources with the most recent state change events are at the top.

Note: These sample terminal replays cut out almost 40 minutes of waiting for the creation and deletion of the CloudFront distributions. You can see the real timestamps in the rightmost columns.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-stack-status/[...]
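For the curious, the core of the dedupe-and-sort step is simple. Here is a hypothetical sketch of that logic alone in Rust (the real tool is a wrapper around the aws-cli; the Event struct and its field names are invented for illustration):

use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Event {
    resource: String,
    status: String,
    timestamp: String, // ISO-8601, so lexical order is chronological order
}

// Keep only the most recent event per resource, newest resources first.
fn latest_per_resource(events: Vec<Event>) -> Vec<Event> {
    let mut latest: HashMap<String, Event> = HashMap::new();
    for e in events {
        let newer = match latest.get(&e.resource) {
            Some(cur) => e.timestamp > cur.timestamp,
            None => true,
        };
        if newer {
            latest.insert(e.resource.clone(), e);
        }
    }
    let mut out: Vec<Event> = latest.into_values().collect();
    out.sort_by(|a, b| b.timestamp.cmp(&a.timestamp));
    out
}

fn main() {
    let events = vec![
        Event { resource: "Bucket".into(), status: "CREATE_IN_PROGRESS".into(), timestamp: "2016-11-21T09:00:01Z".into() },
        Event { resource: "Bucket".into(), status: "CREATE_COMPLETE".into(), timestamp: "2016-11-21T09:00:30Z".into() },
        Event { resource: "Distribution".into(), status: "CREATE_IN_PROGRESS".into(), timestamp: "2016-11-21T09:00:10Z".into() },
    ];
    for e in latest_per_resource(events) {
        println!("{:<22} {:<14} {}", e.status, e.resource, e.timestamp);
    }
}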



Stephen Michael Kellat: Ubuntu Community Appreciation Day 2016

Mon, 21 Nov 2016 03:28:00 +0000

(image)

A screenshot of Ubuntu Planet showing a blog post by Svetlana Belkin

I had almost forgotten about Ubuntu Community Appreciation Day 2016. There is much for me to appreciate this year. I have had to absent myself from many community activities and functions for almost the entire year. There have been random blog posts and I have popped up on mailing lists at the weirdest of times but I have mostly been gone.

Being under audit and investigation for almost the entirety of 2016 can do that to you. Working in a government job also causes such things to happen, too. Thankfully I’m not moving onward and upward to higher office but I’m now thoroughly vetted for all sorts of lateral moves.

(image)

The Xubuntu Sticker from SpreadUbuntu.org found at http://spreadubuntu.org/en/material/sticker/xubuntu-sticker made by lyz

I thoroughly appreciate and miss the Xubuntu team. A great distro continues to be made. I wish I was still there to contribute. Life right now says I have other missions to undertake especially as social fabric in the United States of America seems to get all bendy and twisty.

Tomorrow is another day.




Kubuntu General News: Welcome new Kubuntu Members

Sun, 20 Nov 2016 22:57:08 +0000

Friday November 18 was a productive day for the Kubuntu Community, as three new people were questioned and then elected into Membership. Welcome Simon Quigley, José Manuel Santamaría, and Walter Lapchynski as they package, work on our tooling, promote Kubuntu and help users.

Read more about Kubuntu Membership here: https://community.kde.org/Kubuntu/Membership




Svetlana Belkin: Ubuntu Community Appreciation Day 2016

Sun, 20 Nov 2016 18:06:42 +0000

It’s that time of the year when we appreciate the members of our Ubuntu Community, Member or not.

This year I appreciate a group of people and three individuals. The group is the one that went to Ohio Linux Fest this year: thank you all for the fun!

The first person that I appreciate is Benjamin Kerensa, for his Tweet about me (which explains exactly why I chose him):

The second person is Simon Quigley, who is quite an awesome kid. Over the last year, he has really changed his attitude and even his behavior, to the point where he doesn't sound like a 14-year-old but someone older. Because he is starting so young, he has a good chance of getting a job within Open Source, development-wise or otherwise.

Last but not least, the third person is Pavel Sayekat. Like Simon, he has also improved, and he is now helping to get his LoCo, Ubuntu Bangladesh, active again.

Keep it up, everyone, and thank you for making the Community the way it is!