Geekswithblogs.net – Main Feed
http://www.geekswithblogs.net/MainFeed.aspx

Tech User Groups in the Charlotte NC area

Sun, 15 Jan 2017 17:56:51 GMT

Originally posted on: http://geekswithblogs.net/kjones/archive/2017/01/15/229685.aspx

One of the best things you can do for your tech career is to get involved with local user groups.  I’ve been regularly attending some type of user group meetings since the early 2000s.  They’ve all been Microsoft focused: .NET development, Windows Server/Infrastructure, and SharePoint.

We moved to Charlotte almost seven years ago, and I sought out the local user groups. This blog post is meant to help anyone looking for info about the groups, as I was back then.

Why do I attend user groups? Two primary reasons, I guess: first, to learn something new.  Even when the topic is one I consider myself familiar with, I always end up learning something.  With a familiar topic, it's also good to see how others present it, for the times when I need to explain it myself (a OneNote presentation I attended fell into this category).

Second, the networking.  Meeting others in the tech community and hearing what they're working on and what challenges they've run into is always worthwhile.  One additional, huge benefit of that network shows up when it comes time to search for a job.

Anyway, here’s the list of groups that I know in the area:

Charlotte Area SharePoint User Group (CASUG) – Meets on the 3rd Thursday of each month at the Charlotte Microsoft offices. After attending for a few years, I volunteered to help organize it, so I'm a little partial to this group.

Enterprise Developers Guild – Meets on the 4th Tuesday of each month at the Charlotte Microsoft offices. This is one of the strongest groups in Charlotte, with good leadership and attendance.

Carolina IT Pro Group (CITPG) – Meets on the 3rd Monday of each month, most recently at the Charlotte Microsoft offices. This is one of the oldest groups in Charlotte (from what I understand) and has a slightly different meeting format than the others.

Charlotte Office 365 User Group (MeetUp or Facebook) – A relatively new group that meets on the 4th Wednesday of the month, again at Microsoft.  They have also started a business-user series of meetings that take place during the day (over lunch), but I'm not sure what the schedule pattern is for those.

Charlotte PowerShell Users Group – I haven’t attended in a while, but they seem to still be going strong.

Here are some more that I've never attended:

Charlotte SQL Server User Group

Modern Devs Charlotte (formerly JavaScript devs?)

CharlotteJS

If you’re familiar with these groups, please feel free to leave feedback in the comments.




Alexa, make me a sandwich.

Thu, 12 Jan 2017 12:47:56 GMT

Originally posted on: http://geekswithblogs.net/cwilliams/archive/2017/01/12/229354.aspx

Just wrapped up a pretty beefy article on Alexa programming for the upcoming IoT issue of Code Magazine. That was a lot of fun, and it has definitely rekindled my interest in IoT and home automation. I've been spending a lot of time lately working with SmartThings and Z-Wave devices, and my Amazon Echo, getting them all talking to each other. Almost everything "just works" with minimal configuration, which is nice, and I've been experimenting with responding to sensor input.

I've got my gecko tank set up to switch lights at sunrise and sunset, and that was easy enough. The dining room light is programmed to come on at sunset if nobody is detected as "home", so we don't have to come into a dark house. I've bought a few smart bulbs and smart switches, so I can schedule those (and so I can turn off the lights if I forget, without getting out of bed ;). I've already tagged all the kids (like a pack of wildebeests) so I know whenever they leave and arrive at the house on school days.

I've been eyeballing the Nest thermostats, but haven't tried installing one yet. Not sure how involved they are. When we built the house, we wired everything for Cat6, and I'm setting up PoE cameras.



JavaScript mess to cleaner code articles

Fri, 06 Jan 2017 11:48:55 GMT

Originally posted on: http://geekswithblogs.net/Aligned/archive/2017/01/06/javascript-mess-to-cleaner-code-articles.aspx

Please come visit me and read about how to move from a JavaScript mess to cleaner code. I have several steps written up and a few more to come. All the code is on GitHub.



SQL Server: How can I tell if a table is being used?

Thu, 05 Jan 2017 21:15:33 GMT

Originally posted on: http://geekswithblogs.net/AskPaula/archive/2017/01/05/sql-server-how-can-i-tell-if-a-table-is.aspx

This statement displays the datetime of the last user scan and the last user update, along with the number of user updates against that table.

SELECT OBJECT_NAME(object_id) AS TableName, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('yourDatabaseName')
  AND object_id = OBJECT_ID('yourTableName')
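
If you also want to see reads and writes separately, a variant along these lines (a sketch against the same DMV, not from the original post) returns the individual access columns; note that this DMV is reset whenever the SQL Server service restarts:

SELECT OBJECT_NAME(object_id) AS TableName,
       last_user_seek,    -- last read via an index seek
       last_user_scan,    -- last read via a scan
       last_user_lookup,  -- last read via a key/RID lookup
       last_user_update,  -- last write
       user_updates       -- number of writes recorded
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('yourDatabaseName')
  AND object_id = OBJECT_ID('yourTableName');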



SQL Server: How do I test for the existence of a temp (#temp) table before dropping it?

Thu, 05 Jan 2017 21:00:43 GMT

Originally posted on: http://geekswithblogs.net/AskPaula/archive/2017/01/05/sql-server-how-do-i-test-for-the-existence-of.aspx

This statement will work whether or not the temp table exists:

if object_id('tempdb..#mytempTbl') is not null
  drop table #mytempTbl
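
As a side note (not from the original post): on SQL Server 2016 and later, DROP TABLE IF EXISTS performs the same check-and-drop in a single statement.

-- SQL Server 2016+ only
DROP TABLE IF EXISTS #mytempTbl;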



git, angular2, webstorm

Wed, 04 Jan 2017 17:48:33 GMT

Originally posted on: http://geekswithblogs.net/foxjazz/archive/2017/01/04/git-angular2-webstorm.aspx

npm install -g angular-cli

Although I haven't figured out how to implement the navbar from Bootstrap, I created my own.
I enjoy the convenience and possibilities of Angular 2. There is nothing like it.

The tooling with webpack is good, and the documentation is well written.
It takes some getting used to, thinking in Angular 2, but after a few weeks of trying different things, it's all been worth it!

WebStorm from JetBrains is also good stuff, except for the fact that they call folders "directories". What is that all about? Folks, it's a folder! It's easier to say (two syllables instead of three) and shorter to spell.
Give it a rest. Can't you damn programmers refactor, or at least give users a CHOICE?

Other than that, WebStorm rocks. It's easy to navigate, and it can run the debugger right within the web window with TypeScript and Angular 2!

I spent the day mainly converting my SystemJS Angular 2 app to webpack. And I gotta tell ya, it was worth every lick of time I spent. It's easier with webpack, it's more integrated, and there are fewer hassles.
I had spent so many hours arguing with SystemJS and gulp that I think I broke something.
Now it's all cake, easy peasy.
 




Mapper vs Mapper: The Performance Plot Thickens

Wed, 28 Dec 2016 05:09:58 GMT

Originally posted on: http://geekswithblogs.net/mrsteve/archive/2016/12/28/object-mapper-performance-comparison-allowpartiallytrustedcallers.aspx

Ok, first of all, I'm definitely going to write about something other than mapper performance soon. This is my third blog on the subject and I want to talk a bit about some unique AgileMapper features! But we've got new versions of AgileMapper, AutoMapper and Mapster - the latter including a fix for the bug I found writing my last blog on this subject - and some nuances to talk about. As this is my third update on this subject - and as it's Christmas time - here are the mappers and tests we're talking about, so you don't have to visit the previous blogs. You're welcome! :)

The Mappers

The mappers I'll be comparing are:

- AgileMapper: My mapper project, now on version 0.9. AgileMapper focuses on ease of use, flexibility and transparency.
- AutoMapper: You all already know about AutoMapper - it's AutoMapper! I'm now testing version 5.2.0, as well as version 4.2.1 as requested by a reader.
- ExpressMapper: ExpressMapper is a 'lightweight' mapper, first written as a faster alternative to the AutoMapper 4.x.x series.
- Mapster: Mapster is another 'lightweight' mapper, written to be "kind of like AutoMapper, just simpler and way, way faster" (quoted from their NuGet page). Now on version 2.6.1, and the author has optimised its use in my tests.
- ValueInjecter: ValueInjecter is written for flexibility, and supports unflattening as well as flattening.

The Tests

The performance test project is a console project based on the AutoMapper benchmark which performs each of the following, for each mapper, 1 million times:

- Constructor mapping - creating a POCO with a single constructor parameter from a POCO with a matching property
- Complex mapping - deep cloning a Foo POCO with various kinds of value type properties, multiply-recursive Foo, List<Foo> and Foo[] properties, and IEnumerable<int> and int[] properties
- Flattening - mapping from a POCO with nested POCO properties to a POCO with all value type (and string) properties
- Unflattening - mapping from a POCO with all value type (and string) properties to an object with nested POCO properties - only AgileMapper and ValueInjecter support this
- Deep mapping - mapping a POCO with nested POCO and POCO collection properties onto a differently-typed POCO with corresponding properties

The Nuances

I had a pull request to add the AllowPartiallyTrustedCallers attribute to the test project. As explained in this StackOverflow question, Funcs compiled from Expression trees are hosted in dynamically-created, partially-trusted assemblies; subsequent executions of these Funcs incur a security overhead. Applying AllowPartiallyTrustedCallers to the calling assembly causes part of the security checks to be skipped, which speeds things up. As you'd expect, that's not the whole story, though - assemblies marked with AllowPartiallyTrustedCallers can only call assemblies with compatible security settings. Applying it to the test project means it can't call ExpressMapper, Mapster, ValueInjecter or AutoMapper 4.2.1. So you can't just go around slapping AllowPartiallyTrustedCallers on everything then kick back to think what you'll do with all the execution time you've saved :)

Results Time!

Here are the updated results - as mentioned, the numbers are the total seconds required to perform 1 million iterations of each test. 'w/ APTC' is the time with AllowPartiallyTrustedCallers applied.

                  Constructor   Complex   Flattening   Unflattening   Deep
Manual            0.00880       1.56607   0.05303      0.05021        0.47373
AgileMapper 0.9   0.15377 [...]



Can’t connect to Oracle XE after migrating to Windows 10

Tue, 27 Dec 2016 08:56:26 GMT

Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2016/12/27/219143.aspx

1) After migrating from Win7 to Win10 I couldn't connect to XE on localhost (127.0.0.1).

2) I did add XE to listener.ora.

3) LSNRCTL status does not show the XEXDB service in READY status.

Solution:

sqlplus / as sysdba

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521))' scope=both;

System altered.

SQL> alter system register;

System altered.

SQL> exit

http://stackoverflow.com/a/37129023/39542 – This post solved the question of changing an IP, which was the ticket.




Apple iPhone is the best

Tue, 27 Dec 2016 08:43:46 GMT

Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2016/12/27/219139.aspx

Unless, that is, you happen to Google {iPhone stuck saying searching}, in which case the 13.5 million results returned may sway your opinion. So, you may want to think twice about that service plan. The Apple Store only replaces screens and batteries, but it will swap your iPhone 6 for $300.




1,000 Year Countdown?

Fri, 23 Dec 2016 00:22:33 GMT

Originally posted on: http://geekswithblogs.net/SCebula/archive/2016/12/23/217058.aspx

This was one of the most uncomfortable articles that I read this year:


It's hard to ignore threats to mankind's existence:
  • Environmental changes (e.g. global warming) and subsequent effects (e.g. melting polar ice caps, ocean temperature changes)
  • Ourselves (e.g. ocean pollution in Trash Vortex, air pollution, war)
  • AI evolving into something like Skynet (from the Terminator movies).  Yeah, seemed like fantasy when the movies first came out but with recent technology, it now seems a bit less so.
The solution suggested in the article is to look toward space for a new home.  The pursuit of life elsewhere will probably bring many positive discoveries, but can we really expect to take care of a new home when we could not take care of our previous one?

Modern man has existed for over 200,000 years with civilizations spanning over 6,000 years.  If we have the intelligence to seek out new worlds, shouldn't we also be applying equal (or more) effort toward achieving a sustainable balance on Earth?

1,000 years seems a bit generous when you consider the potential threats. Do we really need to see the math and estimate how growing population and resource consumption will impact our world?  How about seeing the formula for how we should be living a sustainable existence now, preserving resources and ensuring our continued existence here? That knowledge would serve us well on whatever planet we decide to live.





DAX Studio 2.6.0 downloading issues

Thu, 22 Dec 2016 03:06:22 GMT

Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2016/12/22/dax-studio-2.6.0-downloading-issues.aspx

UPDATE: It looks like the 2.6.0a release on CodePlex is now being flagged by Chrome as malicious. I don't know if it's the file or codeplex.com that is the issue (our older releases with thousands of downloads appear to be flagged as "malicious" now too). So I have also made the setup file available as a release from our GitHub repo - https://github.com/DaxStudio/DaxStudio/releases/download/2.6.0/DaxStudio_2_6_0a_setup.exe

If you had trouble downloading the 2.6.0 release yesterday, there is now a new 2.6.0a release up at https://daxstudio.codeplex.com/releases

Yesterday's release of DAX Studio 2.6.0 started off OK. We'd had over 400 downloads, but then at some point the browsers started treating the installer as a malicious file. We're not sure why, but all of them have started throwing up warnings. Chrome seems to be the worst offender, saying that the file is malicious and only giving you the option to discard it.


Both Firefox and Edge give you a very scary warning, but will let you download the file.

The advice on the Chrome "learn more" link is very generic, and there does not appear to be any way of submitting the file for a re-assessment of this judgement.

I checked both the original file, and then downloaded the file from CodePlex and checked it again (just in case something had infected the file after it was uploaded to CodePlex), using Google's VirusTotal.com site; both times 0 out of 55 antivirus scanners reported any issues.

If you are curious, below is a link to the VirusTotal.com results showing the detailed findings:
https://virustotal.com/en/file/df634b42c7ce6027aeab4e90d83594c637d6d6e73acc9e89027ed34de1655d37/analysis/




DAX Studio 2.6.0 Release

Wed, 21 Dec 2016 03:02:00 GMT

Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2016/12/21/dax-studio-2.6.0-release.aspx

We seem to somehow have gotten into the habit of doing pre-Christmas releases every year, so why break with tradition now :) The latest release of DAX Studio has a couple of new features as well as a bunch of small fixes.

The biggest single feature is the support for multiple result sets. Both SSAS 2016 and Power BI support sending multiple EVALUATE statements in a single batch. You can now do the same thing in DAX Studio, and we will generate a numbered tab for each result set.

We've changed the Connection dialog so that connection types that are unavailable are just disabled instead of being hidden. There is also a little help icon with a tooltip which indicates why the particular option is disabled, to help those that are new to DAX Studio.

We've added the ability to connect to SSDT Integrated Workspaces.

There is now a setting under File – Options to allow you to opt in for notifications of pre-release builds. So when you launch DAX Studio, if there is a new pre-release version available you will get a toast notification. I don't think we will always do a pre-release build, but there have been a number of times where it would have been nice to get a few more people testing out new functionality before doing the final release.

When querying measures, the formatting from the data model is now applied. Note that to do this we look for matches between the column names in the result set and the measures in your model. So if you use functions like ADDCOLUMNS or SUMMARIZE you need to make sure to give the output columns the same name as the underlying measure if you want formatting applied (a small example follows at the end of this post).

And there have been a bunch of minor enhancements:

- A link has been added to the download page from Help - About when a newer version is available.
- Added parsing of record counts in Server Timings for SQL 2016 / Power BI Desktop.
- Improved metadata search performance, and the search box now stays open when it has keyboard focus.

There are also a number of fixes in this release, some minor, but some also address some annoying crashes:

- Fixed an issue where Query Plans and Server Timings would not start when connected to PowerPivot.
- Fixed an error when using the locale setting in the connection dialog.
- Fixed an issue with hidden animations running, causing background CPU usage even when the app was idle.
- Fixed crashes when refreshing metadata (also affects the automatic refresh when switching back to DAX Studio after editing your model).
- Fixed PowerPivot connections so that they stay connected when you open another Excel file.
- Fixed blank column headers in the results when running DMV queries.
- Fixed file outputs; csv and tab had been switched.
[...]
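
To illustrate that point about measure formatting, here is a sketch of my own (not from the release notes; the Sales table, Product[Category] column and [Total Sales] measure are made-up names):

EVALUATE
SUMMARIZE (
    Sales,
    Product[Category],
    "Total Sales", [Total Sales]  -- output column named after the measure, so its format string is applied
)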



Angular $broadcast vs. $emit - diffs, pros and cons

Tue, 20 Dec 2016 17:14:05 GMT

Originally posted on: http://geekswithblogs.net/renso/archive/2016/12/20/angular-broadcast-vs.-emit---diffs-pros-and-cons.aspx

Goal: There always seems to be confusion over when to use $broadcast vs. $emit, and it can be hard to remember the scope affected by each one. Of course this also has an impact on performance.

Details: In either case, $broadcast or $emit, Angular uses $rootScope.$on and $scope.$on to listen to these events. Which one to use depends on how far you want to expose the listener or subscriber to dispatched events.

$rootScope

$emit dispatches events upwards only, traversing the scope hierarchy from the bottom up. Only $rootScope listeners will be notified of any changes since it is also the "root" scope. That means any $scope subscribers will not be notified. A simile: when there are grandparents, parents and children in the room, the grandparents speak to each other but neither their children nor grandchildren hear the conversation ~ old people only.

$broadcast dispatches the event downwards to all child scopes, regardless of how deep the hierarchy is. This is a truly expensive exercise and the digest cannot be cancelled. As long as any object is $dirty, the digest lifecycle will continue to cycle, up to 10 times, until no more changes are found. A simile: when there are grandparents, parents and children in the room, the grandparents call everyone to come eat dinner; the message is intended for all in the room (everyone, as in all scopes in the ng-app).

$scope

$emit dispatches events upwards only, traversing the scope hierarchy from the bottom up. This is intended to dispatch events to parents, all the way up to the eventual $rootScope. However, none of the other children get notified. Let's say you have a room with great-grandparents, grandparents, parents and children, four generations, a 4-level-deep scope, and one of the children wants to share a secret with all their parents, grandparents and finally great-grandparents ($rootScope). None of the other children heard a thing!

$broadcast dispatches the event downwards to all child scopes, regardless of how deep the hierarchy is, but only to the $scope and not to any of the $rootScope listeners. A simile: when there are grandparents, parents and children in the room, grandpa wants to share a Christmas gift idea with all of his children and children's children, and so on, but he does not want grandma to know about it; it's a surprise!

Here is a visual diagram to try and explain the scope impact:

Here is an example of $scope broadcasting and emitting, and having one listener/subscriber listen to the event on $scope:

// dispatch the event upwards:
$scope.$emit('someEvent', {
    msg: 'some data object'
});

// dispatch the event downwards:
$scope.$broadcast('someEvent', {
    msg: 'can be some complex object or function' // string or complex objects
});

// listen for the event in $scope:
$scope.$on('someEvent', function (e, data) {
    console.log(data.msg); // 'Data to send'
});

In order to cancel an event from bubbling up when calling $emit, simply stop propagation:

$scope.$on('someEvent', function (e, data) {
    e.stopPropagation();
});

Be careful with naming conventions when broadcasting or emitting; just as with global namespace pollution, you could also pollute the pub-sub event namespaces. I recommend simple namespace hierarchies where the root follows a class-name pattern (nouns) and the sub-level some action. For example, if you are going to share[...]



Testing from Open Live Writer

Tue, 20 Dec 2016 10:21:31 GMT

Originally posted on: http://geekswithblogs.net/AnneBougie/archive/2016/12/20/testing-from-open-live-writer.aspx

Lorem ipsum is a pseudo-Latin text used in web design, typography, layout, and printing in place of English to emphasise design elements over content. It's also called placeholder (or filler) text. It's a convenient tool for mock-ups. It helps to outline the visual elements of a document or presentation, e.g. typography, font, or layout. Lorem ipsum is mostly a part of a Latin text by the classical author and philosopher Cicero. Its words and letters have been changed by addition or removal, so as to deliberately render its content nonsensical; it's not genuine, correct, or comprehensible Latin anymore. While lorem ipsum still resembles classical Latin, it actually has no meaning whatsoever. As Cicero's text doesn't contain the letters K, W, or Z, which are alien to Latin, these and others are often inserted randomly to mimic the typographic appearance of European languages, as are digraphs not to be found in the original.

In a professional context it often happens that private or corporate clients order a publication to be made and presented with the actual content still not ready. Think of a news blog that's filled with content hourly on the day of going live. However, reviewers tend to be distracted by comprehensible content, say, a random text copied from a newspaper or the internet. They are likely to focus on the text, disregarding the layout and its elements. Besides, random text risks being unintendedly humorous or offensive, an unacceptable risk in corporate environments. Lorem ipsum and its many variants have been employed since the early 1960s, and quite likely since the sixteenth century.




BigQuery QuickRef

Wed, 14 Dec 2016 21:33:15 GMT

Originally posted on: http://geekswithblogs.net/JoshReuben/archive/2016/12/15/bigquery-quickref.aspx

Big Data keeps evolving. Stone Age Hadoop was a lot of Java boilerplate for defining HDFS access, Mapper & Reducer. This was superseded by Bronze Age Spark, which provided a succinct Scala unification of:

- ML pipelines
- in-memory structured DataSets over RDDs via a SparkSession SQL API
- Distributed Streams

(Note: You can run such jobs easily in a dynamically scalable manner on Google Dataproc.)

Technology keeps evolving - the Big-Iron Age has arrived in the form of Google Cloud Platform's SPARK KILLER, a nextgen Big Data stack consisting of:

- BigQuery - massively parallel, blazing fast data analytics (https://cloud.google.com/blog/big-data/2016/01/anatomy-of-a-bigquery-query). This fetches from a column-store over a 1 Pb/s backbone and calculates aggregates over hundreds of thousands of VMs - you can't afford to scale Spark to such a distributed cluster!
- TensorFlow - Deep Learning framework (the biggest Machine Learning breakthrough of the decade), distributed and accelerated over custom 'TensorChips'
- DataFlow - (Apache Beam) a structured streaming windowing paradigm with a clean rich API that matches this metaphor - see https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison - build up a transform graph connecting sources and sinks (many of these integration points come out of the box, e.g. gpubsub, bigquery)
- DataLab - Jupyter Notebooks with some extra IPython integration-convenience magic: %%sql, %%bigquery, %%storage, %%chart, %%mlalpha, %tensorboard, %%monitoring

BigQuery has 4 components which you can read about here: https://cloud.google.com/blog/big-data/2016/01/bigquery-under-the-hood

- Borg - Resource Manager
- Colossus - Distributed FS
- Jupiter - 1 Pb/s network
- Dremel - query engine, open sourced as Apache Drill

Anyhow, IMHO Spark cannot compete, but I want to focus here on the BigQuery API. Here's my QuickRef:

bq CLI

cancel - Request a job cancel and optionally waits
  bq --nosync cancel job_id
cp - Copies a table
  bq cp dataset.old_table dataset2.new_table
extract - extract source_table into Cloud Storage destination_uris
  bq extract ds.summary gs://mybucket/summary.csv
head - Displays rows in a table. params: --[no]job, --max_rows, --start_row, --[no]table
  bq head -s 5 -n 10 dataset.table
init - Authenticate and create a default .bigqueryrc file. ??
insert - Insert JSON rows (from file or string) into a table
  bq insert dataset.table /tmp/mydata.json
  echo '{"a":1, "b":2}' | bq insert dataset.table
load - load source file / uris into destination_table, with optional json schema file / string
  bq load ds.new_tbl ./info.csv ./info_schema.json
  bq load ds.small gs://mybucket/small.csv name:integer,value:string
ls - List objects contained in project or dataset. Flags: -j show jobs, -p show all projects
  bq ls mydataset
  bq ls --filter labels.color:red -a -j -p -n 1000
mk - Create a dataset, table or view
  bq mk -d --data_location=EU new_dataset
  bq --dataset_id=new_dataset mk -t new_dataset.newtable name:integer,value:string
  bq mk --view='select 1 as num' new_dataset.newview
mkdef - Emits JSON definition for a GCS backed table
  bq mkdef 'gs://bucket/file.csv' field1:integer,field2:string
partition - Copies date-suffixed source tables into partitioned tables, with the date suffix of the source tables becoming the partition date of the destination table partitions
  bq partition dataset1.sharded_ dataset2.partitioned_table
query - Execute a query
  bq query 'select count(*) from publicdata:samples.shakespeare'
rm - Delete dataset / table (-d / -t flags signify target type, -f force, -r remove all tables of a da[...]



Docker REST API Quickref

Wed, 14 Dec 2016 21:27:17 GMT

Originally posted on: http://geekswithblogs.net/JoshReuben/archive/2016/12/15/docker-rest-api-quickref.aspx

The Docker CLI commands actually encapsulate REST calls to the host Docker Daemon: https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/

The daemon listens on unix:///var/run/docker.sock, which you can curl into:

curl --unix-socket /var/run/docker.sock http:/containers/json

In theory, a Container app could be Cluster-enabled via startup calls to the host Docker Remote REST API!

POST /swarm/join
POST /services/create

Here's a summary of the exposed API surface:

Containers
GET /containers/json - List containers
POST /containers/create - Create a container
GET /containers/(id or name)/json - Inspect a container
GET /containers/(id or name)/top - List processes running in a container
GET /containers/(id or name)/logs - Get container logs
GET /containers/(id or name)/changes - Inspect changes on a container's filesystem
GET /containers/(id or name)/export - Export a container
GET /containers/(id or name)/stats - Get container stats based on resource usage
POST /containers/(id or name)/resize - Resize a container TTY
POST /containers/(id or name)/start - Start a container
POST /containers/(id or name)/stop - Stop a container
POST /containers/(id or name)/restart - Restart a container
POST /containers/(id or name)/kill - Kill a container
POST /containers/(id or name)/update - Update a container config
POST /containers/(id or name)/rename - Rename a container
POST /containers/(id or name)/pause - Pause a container
POST /containers/(id or name)/unpause - Unpause a container
POST /containers/(id or name)/attach - Attach to a container
GET /containers/(id or name)/attach/ws - Attach to a container (websocket)
POST /containers/(id or name)/wait - Wait for a container
DELETE /containers/(id or name) - Remove a container
HEAD /containers/(id or name)/archive - Retrieve information about files and folders in a container
GET /containers/(id or name)/archive - Get an archive of a filesystem resource in a container
PUT /containers/(id or name)/archive - Extract an archive of files or folders to a directory in a container

Images
GET /images/json - List images
POST /build - Build an image from a Dockerfile
POST /images/create - Create an image
GET /images/(name)/json - Inspect an image
GET /images/(name)/history - Get the history of an image
POST /images/(name)/push - Push an image on the registry
POST /images/(name)/tag - Tag an image into a repository
DELETE /images/(name) - Remove an image
GET /images/search - Search images

Misc
POST /auth - Check auth configuration
GET /info - Display system-wide information
GET /version - Show the docker version information
GET /_ping - Ping the docker server
POST /commit - Create a new image from a container's changes
GET /events - Monitor Docker's events
GET /images/(name)/get - Get a tarball containing all images in a repository
GET /images/get - Get a tarball containing all images
POST /images/load - Load a tarball with a set of images and tags into docker
POST /containers/(id or name)/exec - Exec Create
POST /exec/(id)/start - Exec Start
POST /exec/(id)/resize - Exec Resize
GET /exec/(id)/json - Exec Inspect

Volumes
GET /volumes - List volumes
POST /volumes/create - Create a volume
GET /volumes/(name) - Inspect a volume
DELETE /volumes/(name) - Remove a volume

Networks
GET /networks - List networks
GET /networks/(id) - Inspect a network
POST /networks/create - Create a network
POST /networks/(id)/connect - Connect a container to a network
POST /networks/(id)/disconnect - Disconnect a container from a network
DELETE /networks/(id) - Remove a network

Nodes
GET /nodes - List nodes
GET [...]
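
As a quick illustration of driving a couple of these endpoints with curl over the daemon socket (a sketch of my own, not from the original post; the image and container name are just examples):

# List running containers (the REST equivalent of `docker ps`)
curl --unix-socket /var/run/docker.sock http:/containers/json

# Create and start a container named "hello" from the alpine image
curl --unix-socket /var/run/docker.sock \
     -H "Content-Type: application/json" \
     -d '{"Image": "alpine", "Cmd": ["echo", "hello"]}' \
     -X POST "http:/containers/create?name=hello"
curl --unix-socket /var/run/docker.sock -X POST http:/containers/hello/start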



Consul QuickRef

Wed, 14 Dec 2016 21:24:18 GMT

Originally posted on: http://geekswithblogs.net/JoshReuben/archive/2016/12/15/consul-quickref.aspx

HashiCorp Consul (https://www.consul.io/) provides an easy to use, multi-region Service Discovery / Health-Check + distributed config keyval store. If you have ever hacked away at coding ZooKeeper to support distributed systems, Consul's agent-based architecture requires an order of magnitude less effort to roll out. Here's my Consul QuickRef:

CLI options:

agent - Runs a Consul agent
  -dev
  -server
  -config-dir=xxx
  -data-dir=xxx
  -bind=
  -bootstrap-expect=<#server-nodes>
  -name=
  -atlas-join, -atlas=, -atlas-token=xxx
  -ui  # start the ui on http://localhost:8500/ui
config-test - sanity test of config files
event - Fire a new event (can be handled by a watch)
  -http-addr - of the agent
  -datacenter
  -name - of the event
  -node / -service / -tag - regex target filters
  -token
exec - Executes a command on Consul nodes. Same options as event, plus:
  -wait, -wait-repl
force-leave - Forces a member of the cluster to enter the "left" state
info - Provides debugging information for operators
join - Tell Consul agent to join cluster
keygen - Generates a new encryption key
leave - Gracefully leaves the Consul cluster and shuts down
lock - semaphore distributed lock in KV store
  -n #holders
  -name
  -pass-stdin
  -try, -monitor-retry
maint - mark a service provided by a node, or the node as a whole, as "under maintenance"
  -enable / -disable
  -reason
  -service
members - Lists the members of a Consul cluster
  -detailed
  -status
monitor - Stream logs from a Consul agent
  -log-level
reload - Triggers the agent to reload configuration files
version - Prints the Consul version
watch - Watch for changes in Consul
  -http-addr, -datacenter
  -token
  -key
  -name (Event)
  -passingonly=[true|false] - only passing services
  -prefix
  -service - Service to watch
  -state - Check state to filter on
  -tag - Service tag to filter on
  -type - Watch type. Required, one of "key", "keyprefix", "services", "nodes", "service", "checks", or "event".

DNS:
  dig @127.0.0.1 -p 9600 <node>.node.consul
  dig @127.0.0.1 -p 9600 [tag.]<service>.service[.datacenter].consul
  dig @127.0.0.1 -p 9600 <service>.service.consul SRV

API:

querystring params:
  index=X-Consul-Index - block until match ?
  wait - timeout blocking
  consistent / stale - other consistency modes
  near
  passing

Endpoints (curl localhost:8500/v1/...):

/v1/acl/
  create : Creates a new token with a given policy
  update : Updates the policy of a token
  destroy/<id> : Destroys a given token
  info/<id> : Queries the policy of a given token
  clone/<id> : Creates a new token by cloning an existing token
  list : Lists all the active tokens

/v1/agent/
  checks : Returns the checks the local agent is managing
  services : Returns the services the local agent is managing
  members : Returns the members as seen by the local serf agent
  self : Returns the local node configuration
  maintenance : Manages node maintenance mode
  join/<address> : Triggers the local agent to join a node
  force-leave/<node> : Forces removal of a node
  check/register : Registers a new local check
  check/deregister/<checkId> : Deregisters a local check
  check/pass/<checkId> : Marks a local check as passing
  check/warn/<checkId> : Marks a local check as warning
  check/fail/<checkId> : Marks a local check as critical
  check/update/<checkId> : Updates a local check
  service/register : Registers a new local service
  service/deregister/<serviceId> : Deregisters a local service
  service/maintenance/<serviceId> : Manages service maintenance mode

/v1/catalog/
  register : Registers a new node, service, or check
  deregister : Deregisters a node, service, or check
  datacenters : Lists known datacenters
  nodes : Lists nodes in a given DC
  services : Lists services in a given DC
  service/<service> : Lists the nodes in a given service
  node/<node> : Lists the[...]
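
To make the HTTP API above concrete, here are a couple of hedged examples of my own against a local agent (the key, value, service name and port are illustrative; 8600 is Consul's default DNS port):

# Write and read a key in the KV store
curl -X PUT -d 'bar' http://localhost:8500/v1/kv/foo
curl http://localhost:8500/v1/kv/foo?raw

# Register a service with the local agent, then resolve it over DNS
curl -X PUT -d '{"Name": "web", "Port": 8080}' http://localhost:8500/v1/agent/service/register
dig @127.0.0.1 -p 8600 web.service.consul SRV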



Wow, has it been that long... Really?

Tue, 13 Dec 2016 09:55:44 GMT

Originally posted on: http://geekswithblogs.net/AJWarnock/archive/2016/12/13/210760.aspx

Well, what do you do when you are traveling and attending a technical conference and another attendee or a speaker asks for your blog address?  If you have ignored your blog for as long as I have, you may find that you're a bit out of touch, feel a bit embarrassed, and sheepishly give out the old blog address with a promise to yourself to do better…

So here we go again. I don't know if anyone even reads this anymore, and if you did and commented without receiving a response, I apologize; the email account was definitely way out of date and I received no notifications.  Life happens and sometimes throws us a few curve balls in the process that keep us from doing some things that we really enjoyed and hopefully others did too.

That being said… having returned refreshed, excited about new technologies and development practices and more from the last week, I cannot think of a better way of learning than starting a conversation about the many cool things going on in Software and Technology, so what’s it going to be next?

Hmm…
Database…
Architecture…
C#...
Cloud…
Web API’s
Unity…

Guess you will have to check back and see…



First Go Lesson: Variable declaration and passing by reference

Tue, 13 Dec 2016 06:16:47 GMT

Originally posted on: http://geekswithblogs.net/jboyer/archive/2016/12/13/first-go-lesson-variable-declaration-and-passing-by-reference.aspx

I am beginning my journey with Go by taking the Go Fundamentals Pluralsight course. It's great so far. I would love to share some interesting stuff I see about the Go language. Also, it helps me not to forget the nuances of the language and reassures me that I understand how things work.

Variable declaration

Declaring variables in Go is quite interesting. The var keyword is only used to declare variables that are package scoped. Package scope means that the variable can be used by any function inside the package (think module in the Node world or namespace in C#). I've never really thought of that kind of scoping before. I'll have to dig into that more as I continue to learn. The interesting thing is that if you are declaring and instantiating a variable inside a function, you use the ":=" syntax (colon and equals), as in the sketch at the end of this post.

Also, I find it interesting that Go will not compile if you declare a variable and do not use it. That is a great way of making sure there is no waste in your programs.

Passing parameters by reference

Go has another interesting "feature": it always passes function parameters by value. This means that a copy of the value is created and then passed in. Any changes made to the value of that parameter while inside the function will not change the original. So if you declare a variable in one function, pass it into another function that manipulates the value, then use that value after the second function is called, you will get the original value. In the course's example, the course name does not change after being passed into a function that modifies its parameter.

I'm used to languages like C#, where whether or not a parameter is passed by value is dependent on the type of the variable. For instance, strings are always passed by reference and ints are always passed by value. Here we see that even strings are always copied when passed into functions in Go. You have to use pointers in Go if you want to pass by reference and change the original variable. When the code is changed to pass a pointer, the original value is changed. Cool stuff. (See the sketch below for both cases.)

If you are interested in learning more, please check out Nigel Poulton's Pluralsight course: Go Fundamentals. The code samples in the original post came from his course. It's great.
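
Since the original code screenshots did not survive, here is a minimal sketch of my own illustrating both ideas; the variable and function names are mine, not the course's:

package main

import "fmt"

// Package-scoped variable: declared with the var keyword.
var course = "Go Fundamentals"

// changeByValue receives a copy of the string, so the caller's variable is untouched.
func changeByValue(name string) {
	name = "changed inside the function"
}

// changeByPointer receives the variable's address and modifies the original.
func changeByPointer(name *string) {
	*name = "changed inside the function"
}

func main() {
	// Inside a function, declare-and-assign with the := syntax.
	title := course

	changeByValue(title)
	fmt.Println(title) // still "Go Fundamentals"

	changeByPointer(&title)
	fmt.Println(title) // now "changed inside the function"
}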



Which Antivirus Installation Procedure Should Be Followed

Mon, 12 Dec 2016 23:53:37 GMT

Originally posted on: http://geekswithblogs.net/advsoftware/archive/2016/12/13/which-antivirus-installation-procedure-should-be-followed.aspx

Technology is a powerful thing and a thing of beauty that helps people get things done easily, simply, quickly and conveniently. But technology also has an evil side: when it is misused, it can become harmful. With the help of technology, experts invented computers, an incredible tool that helps people do their work quickly and simply. But technology has also produced viruses, which attack computers and other devices to render them useless, and in turn experts have created antivirus software to protect computers.

Sometimes users are reluctant to install antivirus software on their computers, and this can leave their machines open to attack by dangerous viruses. Often people leave their computers unprotected because they are confused about which antivirus software to choose for their system. They should know that there are several online tech support teams, such as the AVG online support team, that can provide guidance. If a computer is giving trouble frequently, its owner should immediately install antivirus software or update the one already installed. For certain problems, and to understand whether a computer is being attacked by viruses, people can also call the Norton online support number. If difficulties come up while installing or updating antivirus software, an antivirus online support team such as the Norton online support team can help as well.

Which steps should be followed to install or update antivirus software without disturbance? The AVG online support team can guide users through installing or updating antivirus software quickly and without trouble. The following steps install an antivirus product:

1. First, gather information about the various antivirus products to determine which one can protect your computer adequately. You can also contact a tech support team to help you choose.
2. Click the name of the product on the software download page to download the installation file.
3. Select "Run" to start the installation process.
4. Follow the instructions shown on the screen.
5. If the antivirus software was purchased, the license number will be required to install it.
6. After completing the installation, restart the computer.

Now the user can relax, as the computer is well protected. [...]



I’m back after a long hiatus

Mon, 12 Dec 2016 21:09:26 GMT

Originally posted on: http://geekswithblogs.net/jboyer/archive/2016/12/12/irsquom-back-after-a-long-hiatus.aspx

It's been some time since I've written to this blog. I guess life got in the way, and at the time I didn't feel that I was contributing enough to warrant the time taken to write up my articles. However, I have received views and comments on the blog complimenting my writing style and encouraging me to keep writing.

Another factor is the fact that I have recently come back to read my old posts and actually liked what I read. I felt that I actually did have something to contribute, even if at the time I wasn’t convinced my pontificating would go very far.

At the very least, it is somewhat therapeutic to write these things down. I’ve learned so much in the past 5 years since I posted anything here and I feel that there is more to share. Not only that, but sometimes blogs become a great place to put stuff down that you don’t want to forget.

I don't want to forget my initial thrust for this blog. It is called the Green Machine because I hoped I could use it to show that young and inexperienced developers do have something to give and can make great software. I was a young and inexperienced developer myself when I started it. Of course, I'm old and grizzled now, but I still have something to tell these young developers.

 

The truth is, we are all young developers in some capacity. I consider myself an accomplished C# and JavaScript developer. I now am embarking on a journey to learn Go. In terms of Go, I am a young and inexperienced programmer. Maybe that is a great upside of the software world. At any time, you can choose to reinvent yourself by learning a new language and trying to see where it can take you.

I’ll post my findings when it comes to learning Go. I hope you’ll find it as interesting as I do. I’ve also started the journey into application security, gaining my CompTIA Security+ certification. Be on the lookout for blog posts on security as well.

 

Hopefully, there will be some out there willing to bear my pontificating once more.




13 modern ads that are even more sexist than their "twins" of the era of Mad Men

Sat, 10 Dec 2016 07:59:28 GMT

Originally posted on: http://geekswithblogs.net/geekwithblogs/archive/2016/12/10/13-modern-ads-that-are-even-more-sexist-than-their.aspx

Only hours after the start of the batch of episodes that will bring Mad Men to an end, we look at some examples of modern advertising that are even more sexist and chauvinistic than many of the ads of the "Mad Men" era. We tend to think that times have changed, but seeing the following examples it is inevitable to ask whether the current situation is really so far from that of those years, in which the ads featured happy homemakers who could not drive cars but happily used vacuum cleaners.

Although demeaning ads were more ubiquitous in the 50s and 60s, their modern counterparts are not far behind. Thanks to Business Insider, we have compiled some notoriously sexist ads and paired them with their vintage twins, and the similarities are as striking as they are depressing.

1. This old ad raised the concept of "walking over women" to the next level. But this modern Valentino ad is not much different from the previous one, although at least the image of the woman as a red carpet appears as a joke.
2. This vintage ad says that if his wife has no domestic skills, she can at least give him beer. In this 2008 ad, Stil vodka offered men a trip to Russia to choose a Russian bride.
3. This vintage ad showed a ketchup cap so easy to open that even a woman could do it. This modern technology ad uses the same concept.
4. Women were at the feet of the men in the 60s. In the 21st century they still are. At least in the vintage ad they left the woman's face visible...
5. In the 60s, men practiced mind control through tobacco smoke. This modern ad for a clothing brand has a more direct approach.
6. This vintage ad celebrates the power of women to manipulate their husbands into buying appliances. Decades later, this ad maintains the idea that it is still the husband who pays for everything.
7. In the last century, a washer-dryer was the perfect Valentine's gift. It seems that today appliances are still the key to her heart.
8. In the era of "Mad Men", advertising made sure that women knew the importance of staying slim while doing household chores. Today, weight loss remains the key to domestic happiness.
9. And then there were the car ads. At least today we are given a reason why women crash cars: of course, everything is the fault of mascara.
10. In the past, part of the attraction of a car was whether or not it was easy enough for a woman to drive. Here BMW puts a woman in the role of a car for your partner to drive.
11. This ad implicitly suggests that "Rosie" could have sex with five men... while Dolce & Gabbana, a few years ago, was on everyone's lips for depicting a gang rape.
12. In 1967, Drummond used a nude female model to sell menswear. And here Tom Ford does the same.
13. But some things have changed. This old ad suggests that husbands are superficial. By contrast, here is a modern ad suggesting that women are superficial. Progress?

Author Bio: Penelope has a degree in Mass Communication from the Sheffield College, Sheffield, and she works for Dissertation Writing Service as an expert dissertation editor providing dissertation help to many students around the clock. She loves working with students and finds great pleasure to be associated with a platform which provides great support and enrichment to students that are in the toughest phase of education. Penelope works with[...]



Is it possible to measure feelings in social networks?

Sat, 10 Dec 2016 07:53:16 GMT

Originally posted on: http://geekswithblogs.net/geekwithblogs/archive/2016/12/10/is-it-possible-to-measure-feelings-in-social-networks.aspx

When an emotional connection with something or someone is established, that link becomes stronger and tends to be durable over time. If this union is carried over to the consumer/brand relationship, the same holds. Feelings create a characteristically effective connection between the client and the brand, making the buyer much less permeable to the impact that competition can generate. It is therefore understandable that the emotional aspect carries so much weight and has become a crucial part of brand strategy.

According to data collected by Synapse, specialists in the development of Internet businesses, today people use the television or newspapers to learn about events and then use social networks to comment on them emotionally. Therefore, the emotions that circulate on networks are a basic asset for brands. "It is possible to generate a more durable and effective relationship between user and brand through emotional content on networks. This does not mean neglecting business objectives; on the contrary, establishing a space for honest communication makes it possible to generate actions with a perfectly measurable impact on specific objectives," they add from the company.

However, nowadays it is difficult to assess emotions in social networks scientifically, and when analyzing the data, communication structures and relationships are lost. Therefore, Big Data and analytics are the key behind the measurement of emotions in social networks. The plurality of fields of action of analytics is what makes it decisive and allows a certain brand to stand out from the rest. This year, unlike 2015, new analytical practices are what allow brands to reach a different level in decision-making and business dynamics. Despite the difficulty of the calculation, there are some tools that help companies measure emotions, among them Addictomatic, Sentiment140, Klout and Meltwater.

Author Bio: Jessica has a degree in Social Work from the Bristol University, and she works for Assignment Writing Service as an expert assignment editor providing assignment help to many students around the clock. She loves working with students and finds great pleasure to be associated with a platform which provides great support and enrichment to students that are in the toughest phase of education. Jessica works with one of the leading assignment writing services and is a great help for students who are looking for UK assignment writers. [...]



Social networks are the "darling" of the brands when launching new products

Sat, 10 Dec 2016 02:46:22 GMT

Originally posted on: http://geekswithblogs.net/geekwithblogs/archive/2016/12/10/social-networks-are-the-darling-of-the-marks-when-launching.aspx

Traditional "above the line" media were once the allies of the big brands when launching new products and services. Not so in times in which social networks have become the first choice of advertisers for product launches, well ahead of the all-powerful television. According to a recent study conducted in the US, UK and Australia, 74% of marketers prioritize social networks when they launch new products and services. The second favorite method of marketers for product launches is sales promotions (55%), while the bronze medal goes to email marketing campaigns (53%). Traditional television advertising has to settle for sixth place, behind print advertising and public relations.

The insignificance of the small screen in launching new products and services is consistent with an earlier report from Five by Five, which concluded that only 6% of consumers consider television a key factor in deciding whether or not to buy a new product. It is clear that social media are the "darling" of advertisers when launching new products and services, but what leads brands to trust them so blindly? 64% of marketers use active listening in social networks to support the development of new products. Those who make the most use of "social listening" are advertisers in the retail sector (72%), and the least, financial services (51%). Moreover, 46% of marketers are convinced that one of the greatest benefits of 2.0 platforms is generating "awareness" before launching a new product or service.

When launching a new product or service, most brands are always up to their necks. 70% of marketers say they have six months or less to prepare a product launch, and 85% say that the time between the moment an idea originates and the subsequent launch of the product has shrunk considerably over the past five years. However, despite the time pressure marketers suffer when launching new products and services, 88% are convinced that they are currently in a position to make better informed decisions on product launches, and a similar proportion, 87%, say that product launch campaigns are easier to measure.

Author Bio: Penelope has a degree in Mass Communication from the Sheffield College, Sheffield, and she works for Dissertation Writing Service as an expert dissertation editor providing dissertation help to many students around the clock. She loves working with students and finds great pleasure to be associated with a platform which provides great support and enrichment to students that are in the toughest phase of education. Penelope works with one of the leading dissertation writing services and is a great help for students who are looking for dissertation proposal writing. [...]



How to Install Microsoft SQL Server on Ubuntu Linux

Thu, 08 Dec 2016 18:11:37 GMT

Originally posted on: http://geekswithblogs.net/JeremyMorgan/archive/2016/12/08/how-to-install-microsoft-sql-server-on-ubuntu-linux.aspx

I must admit I was surprised when I learned that Microsoft SQL Server would be available on Linux. They've been pushing the open source initiative hard, but I didn't expect something this big. Oh yeah, Visual Studio is now available for Mac as well. I just saw a pig flying by.

While MS-SQL is not open source, they have made it available to run on open source platforms such as Linux and OSX, which I can imagine took a ton of work. So I decided to take advantage of this new option and try it out. It works great! It took 5 minutes to install. Here's how you can do it too. Note that you will need a server with 3.5 gigs of RAM for this.

The first thing I always do on an Ubuntu machine is update it:

sudo apt-get update
sudo apt-get upgrade

Next we need to import the public repository GPG keys:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

Next we'll install SQL Server:

sudo apt-get update
sudo apt-get install -y mssql-server

Now we need to run a configuration script to set up the server:

sudo /opt/mssql/bin/sqlservr-setup

It will ask if you want to start the service and if you'd like to start it on boot. Here's how you can check if the service is running:

systemctl status mssql-server

Install the MSSQL Tools for Linux

To test this out a little, install the MSSQL tools on Ubuntu. Add in a new repository:

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
sudo apt-get update
sudo apt-get install mssql-tools

Now, let's try to connect!

sqlcmd -S localhost -U SA -P ''

You can run this command to view all your databases:

SELECT Name from sys.Databases;
GO

This should look pretty familiar to you if you've worked with SQL in the past.

CREATE DATABASE acmewidgets;
GO

Now we need to select that database:

USE acmewidgets;
GO

As a test, let's create a customer table:

CREATE TABLE customer (id INT, firstname NVARCHAR(50), lastname NVARCHAR(50));
GO

Now, let's put some customers in there:

INSERT INTO customer VALUES (1, 'Lloyd', 'Christmas');
INSERT INTO customer VALUES (2, 'Harry', 'Dunn');
INSERT INTO customer VALUES (3, 'Mary', 'Swanson');
GO

Now, let's take a look at those customers:

SELECT * FROM customer
GO

And it's that easy! You can run SQL scripts here, or connect to it from SSMS, a traditional ASP site, or a .NET Core site/app. I'll be doing a lot of ASP.NET Core work in the coming months, so be sure to check back here.

To quit from SQL Server, type in QUIT. And you're done!

I'll be messing with this some more in the coming weeks and really putting it to the test, and I'll share my results. -Jeremy

I also did a YouTube tutorial for this article: https://www.youtube.com/embed/OqsOdUNsO4g

Thanks! Are you an IIS Administrator? Do you want to be one? Check out my new IIS Administration Fundamentals course at Pluralsight! - Jeremy [...]



Top 5 Frameworks for Mobile App Testing

Thu, 08 Dec 2016 07:29:54 GMT

Originally posted on: http://geekswithblogs.net/androidappdevelopers/archive/2016/12/08/top-5-frameworks-for-mobile-app-testing.aspx

Smartphones have undoubtedly changed our lives. In this era of mobile application development, every mobile app developer is looking for amazing tools that help in developing a robust and powerful application. We all know that mobile applications have simplified our lives and help us in our day-to-day activities. With the help of these powerful and smart apps, a person can do anything, such as book movie tickets, watch online videos, play games, make online transactions, etc.

All these activities can be performed only when the mobile apps work smoothly. To offer smart functionality and a better user experience, mobile app developers have to look to new technologies and tools to achieve this performance. Many top mobile app developers are able to develop amazing mobile applications but, due to the limited functionality offered by mobile browsers, they are not able to test the mobile app to check whether it is working properly or not. Testing is a crucial part of the mobile application development process. It makes these applications run smoothly and function properly on smartphones. Testing allows a developer to check and view source code, and to review the application from several aspects such as user experience, functionality, user interface, social network integration, etc. Hence, here I have compiled a list of frameworks that help in mobile app testing:

1. Appium: Appium is an open-source cross-platform test automation framework for iOS and Android mobile applications. Being a cross-platform framework, it allows testing of native, hybrid and cross-platform mobile applications. It allows testers to write tests against several mobile platforms while using the same application programming interface (API). With Appium, a user is able to use the test practices, frameworks and tools required for the testing of the mobile application. Code can be re-used by developers between iOS and Android test suites. It includes several client libraries such as Python, Java, JavaScript, Ruby, PHP and C#.

2. Espresso: Espresso is an open-source mobile testing automation framework offered by Google for the testing of Android applications. It allows Android mobile application developers and testers to bring the best out of their mobile apps for the Google Play Store. Espresso offers a small and easy-to-learn API that is built on top of the Android instrumentation framework. It allows testers to write reliable Android UI tests, and it supports API level 8 (Froyo), 10 (Gingerbread), 15 (Ice Cream Sandwich) and all later versions. It can be synchronized with the Android UI thread. It does not support web views for mobile apps.

3. Robotium: Robotium is again Google's open-source mobile app testing framework for native and hybrid Android applications. It has powerful and easy-to-write automatic black-box UI tests, where a user is able to write test cases around functions and user acceptance test scenarios while handling several other Android activities. With just the .apk, a tester can write test code for Android mobile applications. Robotium features run-time binding to user-interface (UI) components. It provides efficient test case execution. It also provides easy integration with Maven, Gradle and ANT.

4. Appcelerator: Appcelerator is an open source Softw[...]



Neo4jClient–Bolt Edition

Wed, 07 Dec 2016 11:43:07 GMT

Originally posted on: http://geekswithblogs.net/cskardon/archive/2016/12/07/neo4jclientndashbolt-edition.aspx

Neo4j 3.x added a new transport protocol called ‘Bolt’, in essence a binary transmission protocol to speed up connectivity to the database, and shipped a series of official drivers to get everyone on an even keel.

Trouble is, changing from Neo4jClient to Neo4j-Driver is a big pain, for two main reasons:

1. You lose all the fluent setup you get with Neo4jClient (Cypher.Match(“”).Where(“”) etc)

2. You lose all the object mapping (.Return(u => u.As<User>()))

Some people would be happy just to be able to write plain Cypher text and lose 1, but 2 is a big deal breaker. Plus, if you have any amount of code, swapping it out would just take a long time. I happen to have a relatively large codebase using Neo4jClient (Tournr) and I’m quite frankly not willing to swap it out.

I’ve pushed a Bolt version to my MyGet location:

https://www.myget.org/feed/cskardon/package/nuget/Neo4jClient

It doesn’t have transaction functionality (in either the full .NET Framework or .NET Core) for Bolt, but it should have the object mapping. Please try it and let me know about any problems.

Code-wise, change:

new GraphClient(new Uri("http://localhost:7474/db/data"));

to

new BoltGraphClient(new Uri("bolt://localhost:7687"));
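Once you are on the Bolt client, the fluent API should work as before. Here is a minimal sketch; the User class and the query itself are hypothetical, just to show the shape of the calls:

public class User { public string Name { get; set; } }

var client = new BoltGraphClient(new Uri("bolt://localhost:7687"));
client.Connect();

// Fluent Cypher query with object mapping, same as with GraphClient.
var users = client.Cypher
    .Match("(u:User)")
    .Where((User u) => u.Name == "Jo")
    .Return(u => u.As<User>())
    .Results;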




How to use a custom method from a different namespace in Razor?

Tue, 06 Dec 2016 21:36:37 GMT

Originally posted on: http://geekswithblogs.net/anirugu/archive/2016/12/07/how-to-use-custom-method-from-different-namespace-in-razor.aspx

To use a method from a custom library in your Razor views, you need to add its namespace to Views/web.config.

For example, if you want to use LINQ methods, you add an <add namespace="System.Linq" /> entry inside the <namespaces> section (under <system.web.webPages.razor> / <pages>) of Views/web.config.

When you open that web.config file you will already see some namespaces listed there. After adding your namespace, you can use its methods in any of your views.

 

If this still doesn't work in VS 15, close Visual Studio and restart it; it should work after that.




Moving On to AlignedDev.net

Fri, 02 Dec 2016 22:10:02 GMT

Originally posted on: http://geekswithblogs.net/Aligned/archive/2016/12/02/moving-on-to-aligneddev.net.aspx

I've been with GeeksWithBlogs.net since November 2011 with my first post "A quicker way to work with Resx files in Silverlight". It has been very good for me and my posts have gotten a lot more exposure than I would have on my own. Unfortunately, GWB hasn't upgraded their technology and I got the urge to buy a domain to create a "personal domain".

I'll be posting at http://www.aligneddev.net/ using Hugo. I hope you'll visit me there often!

Thanks for 5 great years of sharing and learning from each other. I look forward to many more to come.



.Net Core App Error: Can not find runtime target for framework

Thu, 01 Dec 2016 08:43:17 GMT

Originally posted on: http://geekswithblogs.net/bconlon/archive/2016/12/01/.net-core-app-error-can-not-find-runtime-target-for.aspx

This has been annoying me for a while, but as usual the solution was simple. It turns out Visual Studio 2015 removes some key entries from project.json when you update Microsoft.NETCore.App using NuGet Package Manager.

Before updating to Microsoft.NETCore.App v1.1.0 project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    }
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

After updating to Microsoft.NETCore.App v1.1.0 project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "Microsoft.NETCore.App": "1.1.0"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Which causes the error:

Can not find runtime target for framework '.NETCoreApp,Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'.

So to fix it just add the missing bits back into project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    }
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}




Keyboard short-cut to get your lost Google Chrome tabs back

Wed, 30 Nov 2016 08:51:18 GMT

Originally posted on: http://geekswithblogs.net/renso/archive/2016/11/30/keyboard-short-cut-to-get-your-lost-google-chrome-tabs-back.aspx

The Issue:

You just accidentally closed your Google Chrome browser, and oh boy, you lost all your open tabs and windows. Google Chrome has a really helpful short-cut to get them all back. You also have similar options in Firefox and other browsers but have to use their menu system to “Reopen closed tabs”.

The Solution:

In Google Chrome, simply use this keyboard shortcut:

Ctrl+Shift+T

If you’re not using Google Chrome, my question is: “Why not?”



Mapper vs Mapper: Performance Revisited

Tue, 29 Nov 2016 13:44:53 GMT

Originally posted on: http://geekswithblogs.net/mrsteve/archive/2016/11/29/object-mapper-performance-comparison-revisited.aspx

Update: I have more up-to-date results than these, including updated versions of AgileMapper, AutoMapper and Mapster.

I recently wrote a blog on the performance of various object-object mappers, but it turns out my tests weren't quite fair and equal at that point. Specifically:

Creating Empty Collections vs... Not

ExpressMapper, Mapster and ValueInjecter were leaving target collections as null if their source value was null, which means AgileMapper, AutoMapper and the manual mappers I wrote were creating millions (and millions) of collection objects the other three mappers weren't. No fair! (A small sketch of the difference is below.)

Mapping Objects vs Copying Object References

In some instances, Mapster and ValueInjecter were copying references to the source objects instead of mapping the objects themselves. The ValueInjecter usage was more my mistake than anything (its Mapper.Map(source) doesn't perform a deep clone by default), but I raised a bug for the behaviour in Mapster, and it's going to be fixed in an upcoming release.

The Updated Results

So I've updated the relevant mapper classes and re-measured. As before, these are the total number of seconds to perform one million mappings:

                        Constructor   Complex       Flattening   Unflattening   Deep
Manual                  0.00890       1.76716       0.05548      0.05300        0.50052
AgileMapper 0.8         0.17826       4.33683       0.38902      0.57726        1.15797
AutoMapper 5.1.1        0.15132       7.28190       0.36671      -              0.95540
ExpressMapper 1.8.3     0.21580       15.48691 ^    0.52166      -              6.56550 ^
Mapster 2.5.0           0.12014       2.50889 *     0.29629      -              1.69947
ValueInjecter 3.1.1.3   0.47637       101.70602 ^   12.67350     13.71370       28.05925

I've marked the major changes with a ^, and put a * next to Mapster's deep cloning result, because some of the objects in that test are having their references copied instead of being cloned; it's likely to be significantly slower if that were not the case.

Points to Note

Mapster is still the fastest at churning out simple objects - it just gets slower when mapping nested object members.

ValueInjecter's reflection-heavy approach is very costly, but as I understand it that is how it's intended to be used - as before, I'm happy[...]
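To make the "empty collections vs. null" point concrete, here is a rough sketch; the Source and Target classes are hypothetical stand-ins, not the actual benchmark types:

using System.Collections.Generic;
using System.Linq;

public class Source { public List<string> Items { get; set; } }
public class Target { public List<string> Items { get; set; } }

public static class ManualMappers
{
    // Always allocates a target list, even when the source list is null -
    // the extra work AgileMapper, AutoMapper and the manual mappers were doing.
    public static Target MapCreatingEmptyCollections(Source source)
    {
        return new Target { Items = (source.Items ?? Enumerable.Empty<string>()).ToList() };
    }

    // Leaves the target list null when the source list is null -
    // what ExpressMapper, Mapster and ValueInjecter were doing.
    public static Target MapLeavingNullCollections(Source source)
    {
        return new Target { Items = source.Items == null ? null : source.Items.ToList() };
    }
}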



All you need to know about ASP.NET MVC

Mon, 28 Nov 2016 06:13:31 GMT

Originally posted on: http://geekswithblogs.net/Xicomtech/archive/2016/11/28/all-you-need-to-know-about-asp.net-mvc.aspx

The Model-View-Controller pattern, or MVC, is an architectural design pattern that divides a web app into three main components: Model, View and Controller. The ASP.NET MVC framework acts as an alternative to the ASP.NET Web Forms model that was previously used for creating web applications. The good thing about MVC is that it is a lightweight, presentable and highly testable framework which comes pre-loaded with existing ASP.NET features like master pages and membership-based authentication. MVC is a standard design pattern that is familiar to most developers, and in a short time .NET MVC development has benefited millions of small and large web applications. Still, plenty of internet apps depend on the traditional ASP.NET structure of web forms and postbacks. With the right approach, you can quickly take your site to greater heights.

When to Create an MVC Application

Before you develop a web app, make sure you carefully choose between the ASP.NET MVC framework and the ASP.NET Web Forms model. The MVC framework does not replace the Web Forms model; each suits different kinds of applications. If you already have existing Web Forms-based applications, these will continue to work exactly the way they used to. It is worth weighing the advantages of each framework before deploying either of them.

Benefits of an MVC-based web application (a minimal controller sketch follows below):

- It becomes easier for developers to manage the complexity of the app by dividing it into the model, the view, and the controller.
- ASP.NET MVC doesn't use view state or server-based forms, which makes it ideal for .NET developers who want full control over the behaviour of an application.
- It uses the Front Controller pattern, which processes web app requests via a single controller.
- It provides better support for test-driven development (TDD).
- The ASP.NET MVC structure works well for apps backed by large teams of designers and developers.

Advantages of a Web Forms-based web application:

- It supports an event model that preserves state over HTTP; Web Forms offer dozens of events supported by hundreds of server controls.
- It uses the Page Controller pattern to add functionality to individual pages.
- It uses view state on server-based forms, which makes state management much easier.
- It is suitable for apps built by a small team of developers and designers.
- In general, Web Forms-based apps are less complex to develop, because the components are tightly integrated and require less code than the MVC model.

Conclusion

The ASP.NET MVC framework is a recent addition to web development technology, and with the help of a reliable .NET developer or firm you can integrate ASP.NET MVC features into your web application. [...]
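To show what the controller piece looks like in practice, here is a minimal sketch of an ASP.NET MVC controller and action; the Product model, the hard-coded data and the view name are hypothetical examples, not part of the original post:

using System.Collections.Generic;
using System.Web.Mvc;

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : Controller
{
    // Handles GET /Products
    public ActionResult Index()
    {
        // In a real app this data would come from the model layer (a repository or service).
        var products = new List<Product>
        {
            new Product { Name = "Widget", Price = 9.99m }
        };

        // Renders Views/Products/Index.cshtml with the product list as its model.
        return View(products);
    }
}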



Highlights

Sun, 27 Nov 2016 15:56:36 GMT

Originally posted on: http://geekswithblogs.net/MobileLOB/archive/2016/11/27/higlights.aspx

This weekend…

I’ve realised the truth that Docker is the future. Please take a few moments to inhale docker.com.

I’ve built an MS SQL Server 2016 instance in Docker, connected to from another container running Swift.




HTTP logging using CustomTraceListener from Enterprise library

Sun, 27 Nov 2016 06:12:13 GMT

Originally posted on: http://geekswithblogs.net/vaibhavgaikwad/archive/2016/11/27/202757.aspx

This was more of a debugging-related topic for me. The overall knowledge is helpful to understand how the CustomTraceListener class can be used to build your own tracing mechanism.

My use case was to trace all the HTTP requests in and out of my application. One way to do that is an HttpModule, but as I never intended to do any re-routing or change the processing, I didn't find an HttpModule necessary. I was looking for a more silent way of doing things in the background.

So here it is:

1. You need the Enterprise Library for logging: https://www.nuget.org/packages/EnterpriseLibrary.Logging/
2. For writing to a file, it's better to use log4net.

Develop a simple library project with the following code inside it:

using log4net;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners;
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace TraceLib
{
    //[ConfigurationElementType(typeof(CustomTraceListenerData))]
    public class DebugTraceListener : CustomTraceListener
    {
        private static readonly ILog log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        public DebugTraceListener()
            : base()
        { }

        public override void Write(string message)
        {
            log.Debug(message);
        }

        public override void WriteLine(string message)
        {
            log.Debug(message);
        }

        public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data)
        {
            if (data is LogEntry && this.Formatter != null)
            {
                this.WriteLine(this.Formatter.Format(data as LogEntry));
            }
            else
            {
                this.WriteLine(data.ToString());
            }
        }
    }
}

This will be your tracing code. Build it and reference it from your web project like a normal "add reference". Then register the listener in the Enterprise Library <loggingConfiguration> section of your web.config, as a customTraceListener under <listeners> referenced from a category source. You're mostly done; configure the log4net part of the web.config so that the data gets written to a file.
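Once the listener is wired up, anything logged through the Enterprise Library Logger flows through DebugTraceListener and out via log4net. Here is a rough usage sketch; the HttpTraceHelper class, the "HttpTrace" category name and the message format are placeholders, not part of the original post:

using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class HttpTraceHelper
{
    public static void TraceRequest(string url, string verb)
    {
        // "HttpTrace" is a placeholder category - it should match a category
        // source that routes to the custom listener in web.config.
        var entry = new LogEntry
        {
            Message = verb + " " + url,
            Severity = TraceEventType.Information
        };
        entry.Categories.Add("HttpTrace");

        // Ends up in DebugTraceListener.TraceData(...), then in the log4net file.
        Logger.Write(entry);
    }
}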
