Tug's Blog
http://tugdualgrall.blogspot.com/feeds/posts/default



A blog of technologies I am working on and interested in... MongoDB, Web, Java, Node, Mac and more...



Updated: 2017-11-21T07:18:36.042-08:00

 



Getting started with MapR-DB Table Replication

2017-08-08T01:16:16.096-07:00

Read and comment on this article on my new blog. Introduction: MapR-DB Table Replication allows data to be replicated to another table that can be on the same cluster or in another cluster. This is different from the automatic, intra-cluster replication that copies the data to different physical nodes for high availability and to prevent data loss. This tutorial focuses on the...



Getting Started With Kafka REST Proxy for MapR Streams

2017-08-08T01:15:53.043-07:00

Read and comment on this article on my new blog. Introduction: MapR Ecosystem Package 2.0 (MEP) comes with some new features related to MapR Streams. Kafka REST Proxy for MapR Streams provides a RESTful interface to MapR Streams and Kafka clusters, to consume and produce messages and to perform administrative operations. Kafka Connect for MapR Streams is a utility for streaming data...
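
As a rough sketch of what producing through such a proxy looks like, assuming a proxy listening on localhost:8082, a hypothetical demo-topic, and the Confluent-style v2 JSON media type (older proxies use the v1 type), the call is a plain HTTP POST, here with only the JDK:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestProxyProduce {
        public static void main(String[] args) throws Exception {
            // Hypothetical proxy endpoint and topic name; with MapR Streams the full
            // /stream-path:topic name must be URL-encoded in the path
            URL url = new URL("http://localhost:8082/topics/demo-topic");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v2+json");
            conn.setDoOutput(true);
            String body = "{\"records\":[{\"value\":{\"hello\":\"world\"}}]}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes("UTF-8"));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

The same endpoint family also exposes consumer instances and admin operations; the point of the proxy is that any HTTP client can do this, no Kafka client library required.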



Getting Started with MQTT and Java

2017-01-04T04:08:36.338-08:00

Read and comment on this article on my new blog. MQTT (MQ Telemetry Transport) is a lightweight publish/subscribe messaging protocol. MQTT is used a lot in Internet of Things applications, since it has been designed to run in remote locations, on systems with a small footprint. MQTT 3.1 is an OASIS standard, and you can find all the information at http://mqtt.org/. This article will...
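
A minimal publish with the Eclipse Paho Java client gives a flavor of the API; the broker URL, client id, and topic below are placeholders:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttPublish {
        public static void main(String[] args) throws Exception {
            // Hypothetical broker and topic; adjust to your environment
            MqttClient client = new MqttClient("tcp://localhost:1883", "demo-publisher");
            client.connect();
            MqttMessage message = new MqttMessage("hello from Java".getBytes());
            message.setQos(1); // at-least-once delivery
            client.publish("house/livingroom/temperature", message);
            client.disconnect();
        }
    }

A subscriber is symmetric: the same client calls subscribe(topic) and receives messages through a callback.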



Getting started with Apache Flink and Kafka

2016-10-12T01:30:40.861-07:00

Read this article on my new blog. Introduction: Apache Flink is an open source platform for distributed stream and batch data processing. Flink is a streaming dataflow engine with several APIs for creating data-stream-oriented applications. It is very common for Flink applications to use Apache Kafka for data input and output. This article will guide you through the steps to use Apache...
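
A minimal sketch of wiring Kafka into Flink, assuming the Flink 1.x connector for Kafka 0.9 (class names changed in later Flink releases) and a hypothetical iot-data topic:

    import java.util.Properties;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class FlinkKafkaExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "flink-demo");
            // Each Kafka record is deserialized as a String from the "iot-data" topic
            DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer09<>("iot-data", new SimpleStringSchema(), props));
            stream.print();
            env.execute("Kafka to stdout");
        }
    }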



Streaming Analytics in a Digitally Industrialized World

2016-10-11T20:32:09.290-07:00

Read this article on my new blog. Get an introduction to streaming analytics, which gives you real-time insight from captured events and big data. There are applications across industries, from finance to wine making, though there are two primary challenges to be addressed. Did you know that a plane flying from Texas to London can generate 30 million data points per flight? As Jim...



Setting up Spark Dynamic Allocation on MapR

2016-10-11T20:27:36.756-07:00

Read this article on my new blog. Apache Spark can use various cluster managers to execute applications (standalone, YARN, Apache Mesos). When you install Apache Spark on MapR, you can submit applications in standalone mode or using YARN. This article focuses on YARN and Dynamic Allocation, a feature that lets Spark add or remove executors dynamically based on the workload. You can...
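
The relevant settings can live in spark-defaults.conf or be set directly on a SparkConf; a sketch, with illustrative executor bounds (the external shuffle service must also be running on each node, and the application would be submitted with spark-submit --master yarn):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class DynamicAllocationExample {
        public static void main(String[] args) {
            // Standard Spark properties; values here are illustrative
            SparkConf conf = new SparkConf()
                .setAppName("dynamic-allocation-demo")
                .set("spark.dynamicAllocation.enabled", "true")
                .set("spark.shuffle.service.enabled", "true")
                .set("spark.dynamicAllocation.minExecutors", "1")
                .set("spark.dynamicAllocation.maxExecutors", "20");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // Executors now scale between 1 and 20 with the workload
            sc.stop();
        }
    }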



Save MapR Streams messages into MapR-DB JSON

2016-03-31T00:06:51.069-07:00

Read this article on my new blog. In this article you will learn how to create a MapR Streams consumer that saves all the messages into a MapR-DB JSON table. Install and run the sample MapR Streams application: the steps to install and run the application are the same as those described in the following article: MapR Streams application. Once you have the default producer and...
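
A minimal sketch of such a consumer, assuming the Kafka 0.9 consumer API and the MapR-DB OJAI Java API (MapRDB.getTable, insertOrReplace); the stream and table paths are hypothetical, and each message value is expected to be a JSON document carrying its own _id field:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import com.mapr.db.MapRDB;
    import com.mapr.db.Table;

    public class StreamToMaprDb {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("group.id", "mapr-db-writer");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // No broker list needed with MapR Streams: the topic path locates the cluster
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("/apps/iot-stream:sensors"));
            Table table = MapRDB.getTable("/apps/sensors-table");
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // Parse the JSON message and store it as a MapR-DB JSON document
                    table.insertOrReplace(MapRDB.newDocument(record.value()));
                }
                table.flush();
            }
        }
    }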



Getting Started with MapR Streams

2016-03-31T00:07:03.052-07:00

Read this article on my new blog. You can find a new tutorial that explains how to deploy an Apache Kafka application to MapR Streams; the tutorial is available here: Getting Started with MapR Streams. MapR Streams is a new distributed messaging system for streaming event data at scale, and it's integrated into the MapR converged platform. MapR Streams uses the Apache Kafka API, so...
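
Because MapR Streams speaks the Kafka API, a standard Kafka producer works once the topic name carries the stream path; a minimal sketch, with a hypothetical /apps/iot-stream path:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class StreamsProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // The topic name embeds the stream path: /stream-path:topic-name;
            // no broker list is needed, the client resolves the cluster from the path
            producer.send(new ProducerRecord<>("/apps/iot-stream:sensors",
                    "sensor-42", "{\"temp\": 21.5}"));
            producer.close();
        }
    }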



Getting Started With Sample Programs for Apache Kafka 0.9

2016-03-30T02:03:49.991-07:00

Read this article on my new blog. Ted Dunning and I have worked on a tutorial that explains how to write your first Kafka application. In this tutorial you will learn how to install and start Kafka, and how to create and run a producer and a consumer. You can find the tutorial on the MapR blog: Getting Started with Sample Programs for Apache Kafka 0.9.
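
To give a flavor of the consumer side of such a first application with the Kafka 0.9 API (broker address, group id, and topic name below are placeholders):

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SimpleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "demo-group");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("demo-topic"));
            while (true) {
                // Block up to 1 second waiting for new records, then print them
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }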



Using Apache Drill REST API to Build ASCII Dashboard With Node

2015-12-10T02:59:39.065-08:00

Read this article on my new blog. Apache Drill has a hidden gem: an easy-to-use REST interface. This API can be used to query, profile, and configure the Drill engine. In this blog post I will explain how to use the Drill REST API to create ASCII dashboards using Blessed Contrib. Prerequisites: Node and Apache Drill 1.2. For this post, you will use the SFO...
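
The article builds the dashboard in Node, but the REST call itself is simple enough to sketch with the bare JDK: POST a JSON payload with queryType and query to /query.json on the Drill web port (8047 by default); the sample query below uses Drill's built-in classpath data:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DrillRestQuery {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8047/query.json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            String body = "{\"queryType\":\"SQL\","
                        + "\"query\":\"SELECT * FROM cp.`employee.json` LIMIT 5\"}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes("UTF-8"));
            }
            // The response is a JSON document with the column list and the rows
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) System.out.println(line);
            }
        }
    }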



Convert CSV file to Apache Parquet... with Drill

2015-08-18T07:44:00.113-07:00

Read this article on my new blog. A very common use case when working with Hadoop is to store and query simple files (CSV, TSV, ...), and then, to get better performance and more efficient storage, to convert these files into a more efficient format, for example Apache Parquet. Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Apache Parquet has the following...
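
A sketch of the conversion driven through Drill's JDBC driver (embedded zk=local connection; the file path, workspace, and column aliases are illustrative): a CTAS statement reads the CSV, whose fields arrive in Drill's columns[] array, and writes Parquet because store.format is set to parquet:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CsvToParquet {
        public static void main(String[] args) throws Exception {
            // Embedded Drillbit; use jdbc:drill:zk=<hosts> against a cluster
            try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
                 Statement st = conn.createStatement()) {
                st.execute("ALTER SESSION SET `store.format` = 'parquet'");
                // CTAS: read the CSV and write it back as Parquet in the dfs.tmp workspace
                st.execute("CREATE TABLE dfs.tmp.`sales_parquet` AS "
                         + "SELECT columns[0] AS id, columns[1] AS amount "
                         + "FROM dfs.`/data/sales.csv`");
            }
        }
    }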



Apache Drill: How to Create a New Function?

2015-07-21T10:04:06.678-07:00

Read this article on my new blog. Apache Drill allows users to explore any type of data using ANSI SQL. This is great, but Drill goes even further and allows you to create custom functions to extend the query engine. These custom functions have all the performance of Drill's primitive operations, but achieving that performance makes writing these functions a little trickier...
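
To give a flavor of the structure involved, here is a sketch following the public Drill UDF API, with a made-up "shout" function that upper-cases a VARCHAR. The tricky part is that eval() is code-generated into the query, so all state must live in the annotated holders and classes inside eval() are referenced by their fully qualified names:

    import io.netty.buffer.DrillBuf;
    import javax.inject.Inject;
    import org.apache.drill.exec.expr.DrillSimpleFunc;
    import org.apache.drill.exec.expr.annotations.FunctionTemplate;
    import org.apache.drill.exec.expr.annotations.Output;
    import org.apache.drill.exec.expr.annotations.Param;
    import org.apache.drill.exec.expr.holders.VarCharHolder;

    @FunctionTemplate(name = "shout", scope = FunctionTemplate.FunctionScope.SIMPLE,
                      nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
    public class ShoutFunction implements DrillSimpleFunc {

        @Param VarCharHolder input;   // the VARCHAR argument
        @Output VarCharHolder out;    // the VARCHAR result
        @Inject DrillBuf buffer;      // scratch buffer provided by Drill

        public void setup() { }

        public void eval() {
            String value = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers
                    .toStringFromUTF8(input.start, input.end, input.buffer);
            byte[] result = value.toUpperCase().getBytes();
            buffer = buffer.reallocIfNeeded(result.length);
            buffer.setBytes(0, result);
            out.buffer = buffer;
            out.start = 0;
            out.end = result.length;
        }
    }

Packaged in a jar (with its sources, which Drill needs for code generation) and dropped into Drill's classpath, this becomes callable as SELECT shout(name) FROM ...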



Introduction to MongoDB Security

2015-02-04T10:12:44.157-08:00

View it on my new blog. Last week at the Paris MUG, I had a quick chat about security and MongoDB, and I decided to write this post explaining how to configure the out-of-the-box security features available in MongoDB. You can find all the information about MongoDB security in the following documentation chapter: http://docs.mongodb.org/manual/security/. In this post, I won't go into detail about...
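
Once authorization is enabled and a user has been created, every client must authenticate; a minimal Java driver sketch, where the user, password, and authSource are placeholders:

    import com.mongodb.MongoClient;
    import com.mongodb.MongoClientURI;
    import org.bson.Document;

    public class AuthenticatedConnect {
        public static void main(String[] args) {
            // Hypothetical user created in the admin database
            MongoClientURI uri = new MongoClientURI(
                "mongodb://appUser:secret@localhost:27017/?authSource=admin");
            MongoClient client = new MongoClient(uri);
            // A simple ping proves that authentication succeeded
            Document ping = client.getDatabase("test").runCommand(new Document("ping", 1));
            System.out.println(ping.toJson());
            client.close();
        }
    }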



Moving My Beers From Couchbase to MongoDB

2015-02-01T20:31:44.813-08:00

See it on my new blog: here. A few days ago I posted a joke on Twitter: "Moving my Java from Couchbase to MongoDB" pic.twitter.com/Wnn3pXfMGi — Tugdual Grall (@tgrall) January 26, 2015. So I decided to move it from a simple picture to a real project. Let's look at the two phases of this so-called project: moving the data from Couchbase to MongoDB, and updating the application code to use...



Everybody Says “Hackathon”!

2015-01-23T07:04:32.593-08:00

TL;DR: MongoDB & Sage organized an internal hackathon. We used the new X3 platform, based on MongoDB, Node, and HTML, to add cool features to the ERP. This shows that "any" enterprise can (and should) do it to: look differently at software development, build strong team spirit, and have fun! Introduction: I have, like many of you, participated in multiple hackathons where developers, designers and...



Nantes MUG: Event #2

2015-01-23T06:16:06.339-08:00

Last night the Nantes MUG (MongoDB Users Group) had its second event. More than 45 people signed up and joined us at the Epitech school (thanks for that!). We were lucky to have two talks from local community members: "How MyScript Cloud uses MongoDB" by Mathieu Ruellan, and "Aggregation Framework" by Sebastien Prunier. How MyScript Cloud uses MongoDB: first of all, if you do not know MyScript, I...



How to create a pub/sub application with MongoDB?

2015-01-12T07:30:01.687-08:00

In this article we will see how to create a pub/sub application (messaging, chat, notification) fully based on MongoDB (without any message broker like RabbitMQ, JMS, ...). So, what needs to be done to achieve such a thing? An application "publishes" a message; in our case, we simply save a document into MongoDB. Another application, or thread, subscribes to these events and will receive...
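
A minimal sketch of the idea with the MongoDB Java driver: the publisher inserts into a capped collection, and a subscriber follows it with a tailable, awaitData cursor (database and collection names below are made up):

    import com.mongodb.CursorType;
    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoCursor;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.CreateCollectionOptions;
    import org.bson.Document;

    public class MongoPubSub {
        public static void main(String[] args) {
            MongoClient client = new MongoClient();
            MongoDatabase db = client.getDatabase("pubsub");
            // Capped collection: fixed size, insertion order, tailable-cursor friendly
            // (create it once; this call fails if it already exists)
            db.createCollection("messages",
                new CreateCollectionOptions().capped(true).sizeInBytes(1024 * 1024));
            MongoCollection<Document> messages = db.getCollection("messages");
            // Publisher side: publishing is just an insert
            messages.insertOne(new Document("channel", "chat").append("text", "hello"));
            // Subscriber side: the cursor blocks and wakes up as documents are published
            MongoCursor<Document> cursor = messages.find()
                    .cursorType(CursorType.TailableAwait).iterator();
            while (cursor.hasNext()) {
                System.out.println(cursor.next().toJson());
            }
            client.close();
        }
    }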



Big Data... Is Hadoop the right way to start?

2014-11-25T07:27:45.098-08:00

In the past two years, I have met many developers and architects who are working on "big data" projects. This sounds amazing, but quite often the truth is not that amazing. TL;DR: You believe that you have a big data project? Do not start with the installation of a Hadoop cluster (the "how"). Start by talking to business people to understand their problem (the "why"). Understand the data you must...



Introduction to MongoDB Geospatial feature

2014-08-21T14:30:00.032-07:00

This post is a quick and simple introduction to the geospatial features of MongoDB 2.6, using a simple dataset and queries. Storing geospatial information: as you know, you can store any type of data, but if you want to query it you need to use coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries: 2d index: uses simple coordinates (longitude,...
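
A small sketch with the Java driver: store the location as GeoJSON (note the [longitude, latitude] order), index it with 2dsphere, and run a proximity query; coordinates and names are illustrative:

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.Indexes;
    import com.mongodb.client.model.geojson.Point;
    import com.mongodb.client.model.geojson.Position;
    import org.bson.Document;

    public class GeoExample {
        public static void main(String[] args) {
            MongoClient client = new MongoClient();
            MongoCollection<Document> places =
                client.getDatabase("demo").getCollection("places");
            // GeoJSON point: coordinates are [longitude, latitude]
            places.insertOne(Document.parse(
                "{ \"name\": \"Eiffel Tower\", \"location\": "
              + "{ \"type\": \"Point\", \"coordinates\": [2.2945, 48.8584] } }"));
            places.createIndex(Indexes.geo2dsphere("location"));
            // Find documents within 5 km of a point
            places.find(Filters.near("location",
                    new Point(new Position(2.35, 48.85)), 5000.0, 0.0))
                  .forEach((Document d) -> System.out.println(d.toJson()));
            client.close();
        }
    }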



db.person.find( { "role" : "DBA" } )

2014-03-29T04:21:04.719-07:00

Wow! It has been a while since I posted something on my blog. I have been very busy moving to MongoDB, learning, learning, learning... finally I can breathe a little and answer some questions. Last week I helped my colleague Norberto deliver a MongoDB Essentials training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that...



Pagination with Couchbase

2013-10-08T06:28:47.023-07:00

If you have to deal with a large number of documents when running queries against a Couchbase cluster, it is important to use pagination to get rows page by page. You can find some information in the documentation, in the "Pagination" chapter, but I want to go into more detail, with sample code, in this article. For this example I will start by creating a simple view based on the beer-sample dataset, the...
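
To sketch the idea with the Couchbase Java SDK 2.x (the original article predates it and uses the 1.x client, so treat this as an approximation; the beer/by_name view mirrors the article's setup): fetch a page, remember the last key and document id, and restart the next query there instead of using an ever-growing skip:

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.view.ViewQuery;
    import com.couchbase.client.java.view.ViewResult;
    import com.couchbase.client.java.view.ViewRow;

    public class PaginatedView {
        public static void main(String[] args) {
            CouchbaseCluster cluster = CouchbaseCluster.create("localhost");
            Bucket bucket = cluster.openBucket("beer-sample");
            // Page 1: the first 10 rows of the view
            ViewResult page = bucket.query(ViewQuery.from("beer", "by_name").limit(10));
            String lastKey = null, lastDocId = null;
            for (ViewRow row : page) {
                System.out.println(row.id());
                lastKey = (String) row.key();
                lastDocId = row.id();
            }
            // Page 2: restart at the last key/doc id and skip that single row;
            // this stays cheap even deep into the result set
            ViewResult next = bucket.query(ViewQuery.from("beer", "by_name")
                    .startKey(lastKey).startKeyDocId(lastDocId).skip(1).limit(10));
            for (ViewRow row : next) {
                System.out.println(row.id());
            }
            cluster.disconnect();
        }
    }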



How to implement Document Versioning with Couchbase

2013-07-18T06:59:23.568-07:00

Introduction: developers often ask me how to "version" documents with Couchbase 2.0. The short answer is: the clients and server do not expose such a feature, but it is quite easy to implement. In this article I will use a basic approach, which you will be able to extend depending on your business requirements. Design: the first thing to do is to decide how to "store/organize" the...
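
A sketch of one basic approach with the Couchbase Java SDK 2.x (key pattern and field names are illustrative, not the article's exact design): before overwriting the current document, copy it to a key::v<N> document and bump a version counter:

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.document.JsonDocument;
    import com.couchbase.client.java.document.json.JsonObject;

    public class DocumentVersioning {
        public static void main(String[] args) {
            CouchbaseCluster cluster = CouchbaseCluster.create("localhost");
            Bucket bucket = cluster.openBucket("default");
            String key = "user:42";
            long nextVersion = 1L;
            JsonDocument current = bucket.get(key);
            if (current != null) {
                Long v = current.content().getLong("version");
                long version = (v == null) ? 1L : v;
                // Archive the current state under a versioned key before replacing it
                bucket.upsert(JsonDocument.create(key + "::v" + version, current.content()));
                nextVersion = version + 1;
            }
            // Write the new "current" document with an incremented version counter
            JsonObject updated = JsonObject.create()
                    .put("name", "Tug")
                    .put("version", nextVersion);
            bucket.upsert(JsonDocument.create(key, updated));
            cluster.disconnect();
        }
    }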



Deploy your Node/Couchbase application to the cloud with Clever Cloud

2013-07-11T05:47:50.475-07:00

Introduction: Clever Cloud is the first PaaS to provide Couchbase as a service, allowing developers to run applications in a fully managed environment. This article shows how to deploy an existing application to Clever Cloud. I am using a very simple Node application that I documented in a previous article: "Easy application development with Couchbase, Angular and Node". Clever...



SQL to NoSQL: Copy your data from MySQL to Couchbase

2013-07-08T05:51:27.944-07:00

TL;DR: Look at the project on GitHub. Introduction: during my recent interactions with the Couchbase community, I was often asked how to easily import data from an existing database into Couchbase. And my answer was always the same: use an ETL such as Talend to do it, or just write a small program to copy the data from your RDBMS to Couchbase... So I have written a small program that...
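
The core of such a program is short; a sketch with plain JDBC and the Couchbase Java SDK 2.x (connection strings, table, and key pattern are placeholders, and every column is stored as a string for simplicity, whereas a real migration would preserve types):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;
    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.document.JsonDocument;
    import com.couchbase.client.java.document.json.JsonObject;

    public class MysqlToCouchbase {
        public static void main(String[] args) throws Exception {
            CouchbaseCluster cluster = CouchbaseCluster.create("localhost");
            Bucket bucket = cluster.openBucket("default");
            try (Connection mysql = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shop", "user", "password");
                 Statement st = mysql.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM customers")) {
                ResultSetMetaData meta = rs.getMetaData();
                while (rs.next()) {
                    // Turn each row into a JSON document, one attribute per column
                    JsonObject doc = JsonObject.create();
                    for (int i = 1; i <= meta.getColumnCount(); i++) {
                        doc.put(meta.getColumnLabel(i), rs.getString(i));
                    }
                    // "table:primaryKey" is a common key convention, not a requirement
                    bucket.upsert(JsonDocument.create("customers:" + rs.getString("id"), doc));
                }
            }
            cluster.disconnect();
        }
    }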



Create a Couchbase cluster in less than a minute with Ansible

2013-05-31T13:21:23.562-07:00

TL;DR: Look at the Couchbase Ansible Playbook on my GitHub. Introduction: when I was looking for a more effective way to create my cluster, I asked some sysadmins which tools I should use. The answer I got during OSDC was not Puppet, nor Chef, but Ansible. This article shows you how you can easily configure and create a Couchbase cluster deployed on many Linux boxes... and the...