Subscribe: adrianba.net
http://adrianba.net/rss/rss.aspx
Language: English



adrianba.net



On land, on sea, and in the ether.



Updated: 2017-10-13T16:31:35-07:00

 



Samsung Smart Switch - How to install on Windows 10

2017-10-13T15:18:00-07:00

Samsung Smart Switch is a tool that will back up your phone and transfer the data to a new Samsung phone. You can also use it to back up your phone to your PC in case you need to restore it later.


You can find Samsung Smart Switch for Mac or PC on the Samsung site. The PC version has a dependency on some relatively old versions of the Visual C++ runtime, which you may not have installed on a newer PC running Windows 10.

You need to install both the x86 and x64 versions of the runtime.

I had managed to install enough apps on my phone that some of them were causing performance problems (one of the challenges I’ve found with Android). I decided to back up my phone, do a factory reset, and then restore things like photos and text messages, but install the apps that I wanted from scratch.

It seemed like Smart Switch was a good tool for this. One hiccup was that the program doesn’t appear to have a way to do a partial restore. I had backed up everything on my phone and, when I did the restore, it put everything back the way it had been. Great verification of the back-up/restore process but not exactly what I wanted in this instance.

I had to do a new backup with only the data items that I wanted to restore selected in the preferences. After a second factory reset, this time the restore did exactly what I wanted and I set about restoring the apps I use often from my Google Play store library.




NPM giving error EAI_AGAIN from Windows Subsystem for Linux (WSL)

2017-10-12T10:44:00-07:00

This morning I was experimenting with some demo code and when I tried to install some packages from npm I got the following network error:

request to https://registry.npmjs.org/XXX failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org:443

where XXX was the name of the package.

I was running bash from Ubuntu using WSL, but when I switched to a Windows command prompt and ran the Windows install of node/npm, it connected just fine.

It took a little searching around, but I discovered that this was most likely due to the network proxy we have at Microsoft, or some other quirk of the network configuration. We are currently testing a new on-demand VPN configuration, which had connected me to the corporate network even though I was actually attached directly. Disconnecting the VPN made things “just work”, so I guess there was some kind of networking loop that Windows native networking could figure out, since it has my login credentials, whereas Ubuntu couldn’t.

So the EAI_AGAIN error is a transient failure, probably caused by something in your network configuration that you can fix.




Using Microsoft Flow to publish medium.com recommendations

2016-09-24T12:01:17-07:00

Like a growing number of people, I read a lot of content on medium.com. I wanted to find a way to share the posts that I find most interesting.

Over the last couple of weeks I’ve been playing with Microsoft Flow, which allows you to create automated workflows between different apps and services. I’m not on the Flow team - I just think the service is interesting, especially now that it is available for everyone to use. It reminds me of Yahoo Pipes.


Tweeting out recommended reading from medium.com is very simple using Flow. Medium has an RSS feed with a list of your recommendations at https://medium.com/feed/@_your-medium-id_/has-recommended. You can create a new Flow trigger from this RSS feed so that new entries cause the flow to run. By adding a “Post a tweet” action, you can have each new entry posted to Twitter. You can use data from the RSS feed to populate the tweet; I post the URL and the title of the recommendation.

That’s it. I saved the flow with my Twitter credentials and now whenever I click the recommended button on a medium.com post, I tweet about it within a few minutes.
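Flow handles all of this without writing any code, but the shape of the pipeline is easy to see in a few lines of JavaScript. This is only an illustrative sketch, not how Flow works internally, and the regex-based RSS parsing is just for demonstration:

```javascript
// Sketch of the pipeline: extract items from an RSS document and build the
// tweet text (URL plus title) that the "Post a tweet" action would send.
function rssItems(xml) {
  const itemRe = /<item>([\s\S]*?)<\/item>/g;
  const items = [];
  let m;
  while ((m = itemRe.exec(xml)) !== null) {
    const title = (m[1].match(/<title>([\s\S]*?)<\/title>/) || [])[1] || '';
    const link = (m[1].match(/<link>([\s\S]*?)<\/link>/) || [])[1] || '';
    items.push({ title: title.trim(), link: link.trim() });
  }
  return items;
}

function tweetText(item) {
  return item.link + ' ' + item.title;
}

const sample = '<rss><channel><item><title>A post</title>' +
  '<link>https://medium.com/p/abc</link></item></channel></rss>';
console.log(tweetText(rssItems(sample)[0]));
// -> https://medium.com/p/abc A post
```

The appeal of Flow is that the trigger, the field mapping, and the Twitter credentials are all handled by the service.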




Using Sublime Text as your git editor on Windows

2016-01-06T14:39:48-08:00

Many git commands require an editor, for example when writing commit messages. I use Sublime Text as my default text editor and I wanted to make sure git launched Sublime Text appropriately.

On Windows, the sublime_text.exe program has a couple of command line options that help here. The -n option causes a new editing window to be created and the -w option tells Sublime Text to wait until you close the window before returning.

By default, the Sublime Text 3 installer puts everything into “C:\Program Files\Sublime Text 3”. The spaces in the folder names make the git command to configure the editor slightly trickier than normal. The following command sets up the global editor for git.

git config --global core.editor "\"C:\Program Files\Sublime Text 3\sublime_text.exe\" -n -w"

You can check that the configuration has been correctly stored with

git config --global -l




MDI to TIFF converter

2015-12-20T11:40:07-08:00

Microsoft Office Document Imaging (MODI) was part of Office up to Office 2007. It allowed you to scan multipage documents into .MDI files. MODI was deprecated and removed in Office 2010, and a modern installation of Windows and Office has no native support for .MDI files. If you have these files archived then you will need to convert them to another format.

“MDI to TIFF File Converter is a command line tool, which allows you to convert one or more MDI files to TIFF. MDI is a proprietary file format of MODI (Microsoft Office Document Imaging), which was deprecated as part of Office 2010. This conversion tool will allow you to view MDI files after they are converted to TIFF. TIFF files may be viewed using a variety of image viewing programs, such as the Windows Fax and Image Viewer.”

The MDI to TIFF File Converter is a command line tool for converting old .MDI files to modern multipage TIFF files, which can be viewed with built-in tools like Windows Photo Viewer.

Run the converter from a command line prompt as follows:

mdi2tif -source myfile.mdi -log output.txt

This will produce the following output:

Converting MDI file myfile.mdi
Writing results file output.txt…

Conversion completed. Files with errors: [0/1]

See output.txt for more details.

myfile.mdi will be converted to myfile.tif.




Using DVI monitors with the new Surface Dock

2015-12-05T22:46:56-08:00

The new Surface Dock from Microsoft has two mini-DisplayPort outputs that can be used to connect a Surface Book, Surface Pro 3, or Surface Pro 4 to two external monitors.

This makes it possible to connect to VGA, DVI, or HDMI devices with the use of an adapter. However, to work with a DVI input you must use an active DVI adapter. Using a passive DVI cable will not work.

I tried using a passive DVI cable to connect to my Dell IN2020M monitors. This resulted in the monitor displaying an error message: “The current input timing is not supported by the monitor display. Please change your input timing to 1600x900@60Hz or any other monitor listed timing as per the monitor specifications.”

I found that the gofanco Gold Plated Mini Displayport to DVI Active Converter on Amazon works well with the Surface Dock and I am now driving my two monitors with a good, stable image.




Using Yeoman to start writing technical specifications with ReSpec

2015-03-14T22:27:51-07:00

Yeoman is a tool that provides a scaffolding system to begin new projects. The genius thing about Yeoman is that, by itself, it doesn’t know how to do anything. This flexibility comes from a modular approach that relies on separate generator modules. Each generator knows how to create a particular kind of project (e.g. a Backbone app or a Chrome extension).

ReSpec is a JS library written by Robin Berjon that makes it easier to write technical specifications, or other documents that tend to be technical in nature. It was originally designed for the purpose of writing W3C specifications, but has since grown to be able to support other outputs as well. One of the best things about ReSpec is its intrinsic understanding of WebIDL. You can outline the design for a new API and it makes it very easy to fill in the description of what the methods and properties do. It also makes it easy to refer to other specs using the SpecRef database.

Bringing these two together, I have created a Yeoman generator called generator-respec that outputs a basic ReSpec document.

Assuming you already have node and npm installed, you can install Yeoman with the command npm install -g yo. After that you should install the ReSpec generator with npm install -g generator-respec.

Now that you have the tools installed, create a new folder to hold your specification and, from the command prompt in that directory, run yo respec. This will prompt you for a title, short-name, spec status, and author information and then create a new index.html document with an outline specification using ReSpec. From here you can edit your spec using the ReSpec documentation as a guide.

The current implementation of generator-respec is very basic. I’m sure there are some obvious things that can be added. One idea I have is to support a subgenerator that creates related specs in the same folder. What else should be added? The generator-respec project is available on GitHub.




It’s time to move on from IE8

2015-03-12T13:23:27-07:00

Yesterday, my esteemed (new) colleague Aaron Gustafson wrote a piece about his reaction to the “Break Up with Internet Explorer 8” site currently doing the rounds in the Twittersphere. He argues for support of older browsers and optimisation for newer, better browsers. I disagree.

Some people don’t have control over their browsing environment. Some people can’t afford to upgrade to a more recent version of Windows because of business software that is expensive to move forward. This is true, but being stuck on IE8 isn’t the common case any longer.

Even Microsoft isn’t going to support IE8 customers after January 2016 and you shouldn’t either. There will be no more security updates for IE8 after that[1]. We all need to move on and we need to continue to encourage organisations to get to IE11 and deploy Enterprise Mode for their legacy applications.

Progressive enhancement is a good goal and something that we should aim for with today’s modern browsers. IE11 has good feature coverage and the new Microsoft EdgeHTML rendering engine that will be used by “Project Spartan” goes considerably further. All the popular browsers are adding lots of new features (track the IE ones at https://status.modern.ie/) and we should make our apps light up in the face of new capabilities. Feature detection is king.

But IE8 is old. It didn’t have support for even the old DOM standards like Core, HTML, Style, Events, etc. You even have to polyfill addEventListener, for goodness’ sake. Yes, this is all possible (maybe using abstractions like jQuery 1.x) but why should we continue to do this for new work? Why continue to bloat the web for an audience that is shrinking ever faster? Most enterprises we engage with are rushing to get to IE11 before the support policy change comes into effect.
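The addEventListener point is worth making concrete. This is roughly the shim pattern IE8 forces on you; a sketch of the common polyfill approach, not any particular library’s code:

```javascript
// Attach an event handler using the standard API when it exists, falling back
// to IE8's attachEvent (which needs the 'on' prefix, has no useCapture
// argument, and passes the event via the global window.event).
function addListener(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, function () {
      handler.call(el, window.event);
    });
  }
}
```

Every such shim is code you ship to every visitor to paper over one ageing browser.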

There are two types of web developers in the world. Those that are building and maintaining legacy systems for enterprises that may well have to support IE8 and probably don’t have to worry too much about modern browsers. And those who are targeting Chrome, Firefox, Safari, Opera, and modern IE. This latter category should target nothing older than IE9 and, given that IE11 share has been bigger than IE9 and IE10 combined for almost a year, I argue that you might just support IE11.

Does this apply to everyone? No, of course not. Is this too simplistic? Yes. Should you just cut people off tomorrow? No, again, of course not. But have a transition plan, let customers know what it is, and then move to a world where you don’t worry about old legacy IE. That’s the kind of web developer I want to be. Despite helping to bring IE8 to life, I’ve broken up with it.


[1] Unless you have a commercial relationship with Microsoft to provide these, and you are required to be executing on your plan to get off IE8.




Using C# to access the Twitter API

2014-05-20T08:25:00-07:00

My last post described how to acquire Twitter OAuth keys and tokens to allow you to use Twitter’s API to access Twitter feeds. I showed how to use the request module with node, which has built-in support for OAuth, to request and process data. In this blog post I will show how to do the same thing using C# and .NET, using the OAuthBase class linked to from oauth.net.

Let’s start with the code to call the Twitter API:

using System;
using System.IO;
using System.Net;
using System.Text;
using OAuth;

class App {
    static void Main() {
        // URL for the API to call
        string url = "https://api.twitter.com/1.1/statuses/user_timeline.json" +
            "?screen_name=adrianba&count=5";

        // Create an HTTP request for the API
        var webReq = (HttpWebRequest)WebRequest.Create(url);

        // Set the OAuth header
        var auth = new OAuthHeader();
        webReq.Headers.Add("Authorization", auth.getHeader(url, "GET"));

        // Echo the response to the console
        using(WebResponse webResp = webReq.GetResponse()) {
            using(StreamReader sr = new StreamReader(
                webResp.GetResponseStream(), Encoding.GetEncoding("utf-8")
            )) {
                Console.WriteLine(sr.ReadToEnd());
            }
        }
    }
}

The code here is similar to the previous post. It creates an HTTP request to the API endpoint and this time simply writes the response to the console. The difference here is that we need to add the OAuth Authorization header. The magic takes place in the getHeader() method:

class OAuthHeader : OAuthBase {
    public string getHeader(string url, string method) {
        string normalizedUri;
        string normalizedParameters;

        // OAuth keys - FILL IN YOUR VALUES HERE (see this post)
        const string consumerKey = "...";
        const string consumerSecret = "...";
        const string token = "...";
        const string tokenSecret = "...";

        // Create timestamp and nonce for this request
        string timeStamp = GenerateTimeStamp();
        string nonce = Genera[...]



Accessing the Twitter API using OAuth

2014-05-19T08:30:00-07:00

Following on from my last post that described using Node to access feeds from Delicious, I’ve also been investigating how to access my Twitter feed. This adds a little more complexity because Twitter requires that your app or script authenticate to Twitter using OAuth.

Per Wikipedia, “OAuth provides client applications a ‘secure delegated access’ to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials.” What this means is that your app can access the Twitter API in an authenticated way using OAuth without having to embed your username and password into the script.

The node request library that I mentioned last time has built-in support for OAuth authentication. It requires that you populate a JavaScript object as follows:

var oauth = {
    consumer_key: CONSUMER_KEY,
    consumer_secret: CONSUMER_SECRET,
    token: OAUTH_TOKEN,
    token_secret: OAUTH_TOKEN_SECRET
};

Each of CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, and OAUTH_TOKEN_SECRET are strings that we must supply as part of the OAuth handshake.

There are two ways to think about using OAuth to authenticate against a service such as Twitter, depending upon the type of app that you are building. The first scenario is where, for example, you are building a Twitter client. You will distribute this application and each user of the application will authenticate using their own credentials so that they can access information from the service as themselves. In the second scenario you are building an application or service that accesses the service as you, and you never need to handle anyone else’s credentials. For example, say you are building a widget on your web site that will indicate how long it has been since you last tweeted. This will always be about you and needs to use only your credentials.

The CONSUMER_KEY and CONSUMER_SECRET values are provided by the service to identify your application. The OAUTH_TOKEN and OAUTH_TOKEN_SECRET represent the credentials of the user accessing the service. They may be determined and stored by your app in the first scenario above, or they may be part of your application in the second.

This all sounds a little complicated so an example will help. Before we get to that we need to get the values. Twitter provides a portal for this at https://apps.twitter.com/. If you login and select Create New App you will see a screen that looks like this:

Here you provide the name of your application, a description, and a link to your web site. For our initial scripting purposes the values here don’t matter too much. There is a Callback URL value but we also don’t need this now and can leave this blank. Finally there are some terms and conditions to read and agree to. Once you have completed this form, press the Create your Twitter application button and you will see a screen that looks like this:

If you click on the API Keys tab you will see something like this:

Since we want our script to access Twitter using our account, we can click on the Create my access token button to generate the appropriate token values. You should see something like this:

You may need to refresh to see your new access token. So now you have four strings: API key, API secret, Access token, and Access token secret. These map to the four values needed in the OAuth structure described in the code above.

There are lots of different ways to access the Twitter API. Here I am simply going to use the user_timeline API to retrieve the 5 most recent tweets from my timeline. You can use this API to retrieve any user’s timeline that you have access to from your Twitter account (including, of course, all the public timelines). So here is th[...]