TechMate
Thoughts Of A Linux Admin.......

Updated: 2018-03-06T19:02:05.983+05:30


Sysstat (sar) for UNIX/Linux - Best Utility for Server Performance Monitoring


Sar (System Activity Reporter) is a command that ships with the sysstat package. Sysstat is a collection of Unix tools used for performance monitoring; the package includes tools such as iostat, mpstat, pidstat, sadf and sar. Along with the real-time commands, sysstat installs a cron job that runs every 10 minutes and collects the system's performance information. Sar is the command you use to read the collected information.

You can monitor the following Linux performance statistics using sar:
- Collective CPU usage
- Individual CPU statistics
- Memory used and available
- Swap space used and available
- Overall I/O activities of the system
- Individual device I/O activities
- Context switch statistics
- Run queue and load average data
- Network statistics
- Report sar data from a specific time

Installation of Sysstat
==================
For Debian based systems:
apt-get install sysstat

For RPM based systems:
yum install sysstat
(or)
rpm -ivh sysstat-10.0.0-1.i586.rpm

Install sysstat from source
=====================
wget
tar xvfj sysstat-10.0.0.tar.bz2
cd sysstat-10.0.0
./configure --enable-install-cron
After the ./configure, install it as shown below:
make
make install

Sar Usage
========
CPU Usage of ALL CPUs (sar -u)
sar -u : Displays CPU usage for the current day that was collected until that point.
sar -u 1 3 : Displays real-time CPU usage every 1 second, 3 times.
sar -u ALL : Same as "sar -u" but displays additional fields.
sar -u ALL 1 3 : Same as "sar -u 1 3" but displays additional fields.
sar -u -f /var/log/sa/sa10 : Displays CPU usage for the 10th day of the month from the sa10 file.

CPU Usage of Individual CPU or Core (sar -P)
sar -P ALL : Displays CPU usage broken down by all cores for the current day.
sar -P ALL 1 3 : Displays real-time CPU usage for all cores every 1 second, 3 times (broken down by core).
sar -P 1 : Displays CPU usage for core number 1 for the current day.
sar -P 1 1 3 : Displays real-time CPU usage for core number 1, every 1 second, 3 times.
sar -P ALL -f /var/log/sa/sa10 : Displays CPU usage broken down by all cores for the 10th day of the month from the sa10 file.

Memory Free and Used (sar -r)
sar -r
sar -r 1 3
sar -r -f /var/log/sa/sa10

Swap Space Used (sar -S)
sar -S
sar -S 1 3
sar -S -f /var/log/sa/sa10

Overall I/O Activities (sar -b)
sar -b
sar -b 1 3
sar -b -f /var/log/sa/sa10

Individual Block Device I/O Activities (sar -d)
sar -d
sar -d 1 3
sar -d -f /var/log/sa/sa10
sar -p -d

Context Switches per Second (sar -w)
sar -w
sar -w 1 3
sar -w -f /var/log/sa/sa10

Run Queue and Load Average (sar -q)
sar -q
sar -q 1 3
sar -q -f /var/log/sa/sa10

Network Statistics (sar -n)
sar -n KEYWORD
KEYWORD can be one of the following:
DEV : Displays vital statistics for network devices (eth0, eth1, etc.)
EDEV : Displays network device failure statistics
NFS : Displays NFS client activities
NFSD : Displays NFS server activities
SOCK : Displays sockets in use for IPv4
IP : Displays IPv4 network traffic
EIP : Displays IPv4 network errors
ICMP : Displays ICMPv4 network traffic
EICMP : Displays ICMPv4 network errors
TCP : Displays TCPv4 network traffic
ETCP : Displays TCPv4 network errors
UDP : Displays UDPv4 network traffic
SOCK6, IP6, EIP6, ICMP6, UDP6 : The equivalents for IPv6
ALL : Displays all of the above information. The output will be very long.
[...]
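The article mentions reporting sar data from a specific time; sar accepts -s and -e (start/end, HH:MM:SS) for that, e.g. `sar -u -f /var/log/sa/sa10 -s 09:00:00 -e 11:00:00`. A captured text report can also be post-processed with standard tools. A minimal sketch that averages the %idle column of a `sar -u` capture; the readings below are invented sample data, not from a real host:

```shell
# Average the %idle (last) column of a captured "sar -u" report.
# Header rows are skipped because their last field is not numeric.
awk '$1 ~ /^[0-9][0-9]:/ && $NF ~ /^[0-9.]+$/ { sum += $NF; n++ }
     END { printf "avg %%idle: %.2f\n", sum / n }' <<'EOF'
12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
12:10:01 AM  all   2.00   0.00     1.00     0.50    0.00  96.50
12:20:01 AM  all   4.00   0.00     1.50     0.50    0.00  94.00
EOF
```

For these two sample readings the script prints `avg %idle: 95.25`.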

How to check if an IP is blocked in iptables............




Check if IP is blocked:

# iptables -L -n --line-numbers | grep [IP Address]

If the IP appears with a DROP or REJECT target, it has been blocked.

Unblock the IP Address:

# iptables -I INPUT -s [IP Address] -j ACCEPT

Blocking an IP address again:

# iptables -A INPUT -s [IP Address] -j DROP
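The check step above boils down to scanning the rule listing for a DROP/REJECT entry that mentions the address. A sketch of that decision in script form; the listing below is an invented sample standing in for `iptables -L INPUT -n --line-numbers` output, and is a documentation address:

```shell
# Decide whether an address appears in a DROP/REJECT rule.
# $listing stands in for real iptables output (sample data only).
ip=""
listing='Chain INPUT (policy ACCEPT)
num  target     prot opt source           destination
1    DROP       all  --
2    ACCEPT     all  --'
if echo "$listing" | grep -E 'DROP|REJECT' | grep -qwF "$ip"; then
  echo "$ip is blocked"
else
  echo "$ip is not blocked"
fi
```

On a live system you would pipe `iptables -L INPUT -n --line-numbers` in directly instead of using a saved listing.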

How to fix the date issue in a VPS.....


Issue:
root@server [~]# date -s "13 Nov 2012 16:01:00"
date: cannot set date: Operation not permitted

Fix:
To correct this problem, exit from the container (#exit) and issue the following commands on the hardware node (CTID is the container's ID):
# vzctl stop CTID
# vzctl set CTID --save --capability sys_time:on
# vzctl start CTID
# vzctl enter CTID

Not necessary:
# mv /etc/localtime /etc/localtime.old
# ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

Now you can set the date and time appropriately.
[...]

What is Virtuozzo...


Virtuozzo is a software application for enterprise server virtualization that allows an administrator to create virtual environments on a host computer at the operating system (OS) layer.

Instead of having one physical machine run multiple operating systems simultaneously, as the virtual machine model used by VMware, Microsoft Virtual Server or Xen does, Virtuozzo approaches virtualization by running a single OS kernel as its core and exporting that core functionality to various partitions on the host. Each of the partitions effectively becomes a stand-alone entity called a virtual private server (VPS).

Virtuozzo comes with a proprietary Kernel Service Abstraction Layer (KSAL) that manages access to the kernel and prevents any single VPS from bringing the entire physical server down. It also has a proprietary file system to completely isolate the partitions, a security precaution that prevents a software fault in one partition from impacting an application or data in another partition.

Every VPS has its own network address and its own set of login credentials, system processes and daemon services. Because the underlying operating system is always running, each VPS can be rebooted independently and data can be migrated from one virtual environment to another on a live host. This capacity, which is not possible with the virtual machine model, is one reason why Virtuozzo has the reputation of being a good choice for production servers working with live data and applications.

Virtuozzo's architecture eliminates system calls between layers, which reduces CPU usage overhead dramatically. Common binaries and libraries on the same host machine can be shared, making it possible for administrators to put literally thousands of VPSs on the same machine, each functioning as a stand-alone server.

Administrators using Virtuozzo have three management options: the command line, Virtual Management Center (a GUI for management and monitoring) and Virtual Control Center (a Web-based interface for remote access).

Virtuozzo is created and distributed by SWsoft. In 2006, SWsoft released the core of Virtuozzo under the GNU GPL (General Public License) in an open-source project called OpenVZ.

Commands
=========
vzctl : Utility to control Containers.
vzlist : Utility to view a list of Containers existing on the Node with additional information.
vzquota : Utility to control Virtuozzo Containers disk quotas.

Licensing utilities allow you to install a new license, view the license state, and generate a license request for a new license:
vzlicview : Utility to display the Virtuozzo license status and parameters.
vzlicload : Utility to manage Virtuozzo licenses on the Hardware Node.
vzlicupdate : Utility to activate the Virtuozzo Containers installation, update the Virtuozzo licenses installed on the Hardware Node, or transfer the Virtuozzo license from the Source Node to the Destination Node.

Container migration tools allow you to migrate Containers between Hardware Nodes or within one Hardware Node:
vzmigrate : Utility for migrating Containers from one Hardware Node to another.
vzmlocal : Utility for the local cloning or moving of Containers.
vzp2v : Utility to migrate a physical server to a Container on the Node.
vzv2p : Utility to migrate a Container to a physical server.

Container backup utilities allow you to back up and restore the Container private areas, configuration files, action scripts, and quota information:
vzbackup : Utility to back up Containers.
vzrestore : Utility to restore backed-up Containers.
vzabackup : Utility to back up Hardware Nodes and their Containers. As distinct from vzbackup, this utility requires the Parallels Agent software for its functioning.
vzarestore : Utility to restore backed-up Hardware Nodes and Containers. As distinct from vzrestore, this utility requires the Parallels Agent software for its functioning.

Template management tools allow the template creation, maintenance and installation of applications into a Container:
vzpkg : Utility to manage OS and [...]

What is cPanel & Plesk...


cPanel
========
cPanel is a Unix-based web hosting control panel that provides a graphical interface and automation tools designed to simplify the process of hosting a web site. cPanel utilizes a 3-tier structure that provides capabilities for administrators, resellers, and end-user website owners to control the various aspects of website and server administration through a standard web browser.

In addition to the GUI, cPanel also has command line and API-based access that allows third-party software vendors, web hosting organizations, and developers to automate standard system administration processes.

cPanel is designed to run on either a dedicated server or a virtual private server. The latest cPanel version supports installation on CentOS, Red Hat Enterprise Linux (RHEL), and CloudLinux. cPanel 11.34 is the last major version to support FreeBSD.

Application-based support includes Apache, PHP, MySQL, PostgreSQL, Perl, and BIND (DNS). Email support includes POP3, IMAP, and SMTP services. cPanel is commonly accessed on port 2082, with an SSL-secured server operating on port 2083.

Once installed, cPanel cannot be removed without extreme difficulty: the server must be formatted and the operating system reinstalled. Similarly, it should only be installed on a freshly installed operating system with minimal prior configuration.

WHM (WebHost Manager)
====================
WebHost Manager (WHM) is a web-based tool used by server administrators and resellers to manage hosting accounts on a web server. WHM listens on ports 2086 and 2087 by default.

As well as being accessible by the root administrator, WHM is also accessible to users with reseller privileges. Reseller users of cPanel have a smaller set of features than the root user, generally limited by the server administrator to features which they determine will affect their customers' accounts rather than the server as a whole. From WHM, the server administrator can perform maintenance operations such as compiling Apache and upgrading RPMs installed on the system.

Plesk
=====
The Parallels Plesk Panel (formerly Parallels Plesk Control Panel, Plesk Server Administrator, PSA, or just Plesk) software package is a commercial web hosting automation program. Originally released under the U.S. company Plesk Inc. and designed in Novosibirsk, Russia, Plesk was acquired by SWsoft in July 2003. SWsoft renamed itself under the Parallels name (a brand which had been acquired by SWsoft) in 2008.

Parallels Plesk Panel allows a server administrator to set up new websites, reseller accounts, email accounts, and DNS entries through a web-based interface. The administrator can create client and site templates, which predetermine resource-allocation parameters for the domains and/or clients.

Parallels Plesk Panel for Linux/Unix supports multiple POSIX platforms, including Debian, Fedora, FreeBSD, Red Hat Linux, SUSE and Ubuntu. Parallels Plesk Panel for Windows supports the Windows Server 2003 and Windows Server 2008 operating systems.

Parallels Plesk Panel installs custom versions of, or manages existing versions of, MySQL and PostgreSQL databases (Microsoft SQL Server and Microsoft SQL Server Desktop Engine under Windows), the Apache Tomcat Java platform server, and ColdFusion server. The latest Plesk Panel version is 11.
[...]
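Since cPanel and WHM sit on fixed ports (2082/2083 and 2086/2087), a quick way to see whether a panel is answering is a plain TCP probe. A sketch using bash's /dev/tcp; the host 127.0.0.1 is just an example target, and the helper name check_port is invented:

```shell
# Probe a TCP port and report open/closed (assumes bash and coreutils
# `timeout`). cPanel serves on 2082/2083, WHM on 2086/2087.
check_port() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}
check_port 127.0.0.1 2083
```

On a box without cPanel this prints `closed`; point it at a real panel host to confirm the service is reachable before blaming a firewall.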

Install and Configure Fail2Ban on CentOS | RedHat


Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs: too many password failures, searching for exploits, and so on. Generally Fail2Ban is then used to update firewall rules to reject the IP addresses for a specified amount of time, although any other arbitrary action (e.g. sending an email, or ejecting the CD-ROM tray) could also be configured. Out of the box, Fail2Ban comes with filters for various services (apache, courier, ssh, etc.).

Steps
===========================================
1. wget
2. tar -xjvf fail2ban-0.8.1.tar.bz2
3. cd fail2ban-0.8.1
4. python setup.py install
5. vi /etc/fail2ban/jail.conf

Enable only the sections you need, and do them one at a time. We enable SSH and ProFTP (both use /var/log/secure) as well as Postfix.

Set your local networks and any other networks you consider 'safe' in the ignoreip option. You certainly don't want to block your own clients!
ignoreip =

Install the init script and enable the service:
cp files/redhat-initd /etc/init.d/fail2ban
chkconfig --add fail2ban
chkconfig fail2ban on
service fail2ban start

Tools
=====
Show failed SSH logins by date:
cat /var/log/secure* | grep 'Failed password' | grep sshd | awk '{print $1,$2}' | sort | uniq -c

Search for the correct log file:
grep ssh /var/log/messages*
grep ftp /var/log/messages*
grep -r NOQUEUE /var/log

This should match Postfix bans:
grep rejected /var/log/maillog

Configuration
=============
Adjust the following sample configuration files to your needs.

# Fail2Ban jail.local configuration file
#################################################
# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.
[DEFAULT]
# ignore our IP ranges
ignoreip =
# "bantime" is the number of seconds that a host is banned.
bantime = 600
# A host is banned if it has generated "maxretry" during the last "findtime" seconds.
findtime = 600
# "maxretry" is the number of failures before a host gets banned.
maxretry = 3
# Don't know how well other backend options work.
backend = polling

[ssh-iptables]
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
         sendmail-whois[name=SSH, dest=, sender=]
logpath = /var/log/secure
maxretry = 3

[proftpd-iptables]
enabled = true
filter = proftpd
action = iptables[name=ProFTPD, port=ftp, protocol=tcp]
         sendmail-whois[name=ProFTPD, dest=, sender=]
logpath = /var/log/secure
maxretry = 3

[postfix]
enabled = true
filter = postfix
action = iptables[name=Postfix, port=smtp, protocol=tcp]
         sendmail-whois[name=Postfix, dest=, sender=]
logpath = /var/log/maillog
maxretry = 5

# Fail2Ban filter.d/postfix.local configuration file
#################################################
[Definition]
failregex = reject: RCPT from (.*)\[<HOST>\]: 554
            reject: RCPT from (.*)\[<HOST>\]: 550
            reject: RCPT from (.*)\[<HOST>\]: 450
ignoreregex =

# Fail2Ban action.d/sendmail-whois.local configuration file
#################################################
[Definition]
actionstart = echo -en "Subject: [Fail2Ban] <name>: started\nFrom: Fail2Ban <<sender>>\nTo: <dest>\n\nHi,\nThe jail <name> has been started successfully.\nRegards,\nFail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
actionstop = echo -en "Subject: [Fail2Ban] <name>: stopped\nFrom: Fail2Ban <<sender>>\nTo: <dest>\n\nHi,\nThe jail <name> has been stopped.\nRegards,\nFail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
actioncheck =
actionban = echo -en "Subject: [Fail2Ban] <name>: banned <ip>\nFrom: Fail2Ban <<sender>>\nTo: <dest>\n\nHi,\nThe IP <ip> has just been banned by Fail2Ban after <failures> attempts against <name>.\n\nHere is more information about <ip>:\n`/usr/bin/dig -x <ip>`\nRegards,\nFail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
actionunban =
[Init]
name = default
dest = root
sender = fail2ban
==========================================================
[...]

How to install and use Clam Antivirus in Linux (RPM based)


ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats. It is the de facto standard for mail gateway scanning. It provides a high performance multi-threaded scanning daemon, command line utilities for on-demand file scanning, and an intelligent tool for automatic signature updates. The core ClamAV library provides numerous file format detection mechanisms, file unpacking support, archive support, and multiple signature languages for detecting threats.

Steps for installation:
1. groupadd clamav
2. useradd -g clamav clamav
3. mkdir /var/clamav
4. chown clamav:root /var/clamav
5. mkdir /var/log/clamav/
6. chown clamav:root /var/log/clamav/
7. mkdir /usr/local/share/clamav
8. chown clamav:clamav /usr/local/share/clamav
9. wget
10. tar xzvf clamav-0.93.3.tar.gz
11. cd clamav-0.93.3
12. ./configure --disable-clamuko --enable-milter --with-dbdir=/usr/local/share/clamav
13. On Red Hat based distributions, compiling clamav may produce an error caused by an incompatibility with zlib. Install both the zlib and zlib-devel packages with:
yum install zlib zlib-devel
If there are errors like:
configure: error: The installed zlib version may contain a security bug. Please upgrade to 1.2.2 or later. You can omit this check with --disable-zlib-vcheck but DO NOT REPORT any stability issues then!
then run the command:
14. yum update zlib zlib-devel
15. The chance of an error still exists, so it is safe to run:
./configure --disable-clamuko --enable-milter --with-dbdir=/usr/local/share/clamav --disable-zlib-vcheck
After running ./configure, for both cases:
16. make
17. make install
18. We need a file named clamav.conf. We edit it in /etc:
vi /etc/clamav.conf
Write the following lines:
#/etc/clamav.conf
LogTime
LogSyslog
LogFile /var/log/clam/clamd.log
PidFile /var/run/clam/clamd.pid
LocalSocket /var/run/clam/clamd.sock
FixStaleSocket
MaxThreads 50
ThreadTimeout 600
MaxDirectoryRecursion 15
FollowFileSymlinks
SelfCheck 600
User clamav
ScanMail
ScanArchive
ArchiveMaxFileSize 10M
ArchiveMaxRecursion 5
ArchiveMaxFiles 1000
Save and close the file.
19. Now tell your startup script to load the ClamAV daemon:
echo "/usr/local/sbin/clamd" >> /etc/rc.d/rc.local
echo "/usr/local/sbin/clamav-milter -l -o -q /var/milter/clmilter.sock" >> /etc/rc.d/rc.local
20. cp /etc/clamav.conf /usr/local/etc/
21. touch /var/log/clam-update.log
22. chown clamav:clamav /var/log/clam-update.log
23. touch /tmp/clamd.log
24. chown clamav:root /tmp/clamd.log
25. mkdir /var/milter
26. chown clamav:root /var/milter/
27. cd /usr/local/etc/
We can modify some configuration files in the directory /etc:
28. cd /etc
29. wget
30. wget
31. mkdir /var/lib/clamav
32. chown clamav:root /var/lib/clamav/
/usr/local/bin/freshclam -l /var/log/clam-update.log
33. cp /usr/local/sbin/clamd /etc/init.d/
34. /etc/init.d/clamd restart

Usage
=====
clamscan -irv <your desired location>
Eg: clamscan -irv /home
If you need the scan results in a file, try:
clamscan -irv <location> > <file name>
Eg: clamscan -irv /home > scan.log
[...]
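To confirm the scanner actually detects something without handling real malware, the standard trick is the EICAR test file: a harmless fixed string that antivirus engines are expected to flag. A sketch (the path /tmp/eicar.txt is arbitrary):

```shell
# Write the standard EICAR antivirus test string to a throwaway file.
# clamscan should then report the file as infected (the file itself
# is completely harmless).
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt
# clamscan -irv /tmp    # expected to flag /tmp/eicar.txt
cat /tmp/eicar.txt
```

If clamscan stays silent on this file, the signature database is likely missing; run freshclam and try again.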

How to install and use Rkhunter


rkhunter is a shell script which carries out various checks on the local system to try and detect known rootkits and malware. It also performs checks to see if commands have been modified, if the system startup files have been modified, and various checks on the network interfaces, including checks for listening applications.

rkhunter has been written to be as generic as possible, and so should run on most Linux and UNIX systems. It is provided with some support scripts should certain commands be missing from the system, and some of these are Perl scripts. rkhunter does require certain commands to be present for it to be able to execute. Additionally, some tests require specific commands; if these are not present then the test will be skipped. rkhunter needs to be run under a Bourne-type shell, typically bash or ksh. rkhunter can be run as a cron job or from the command line.

INSTALLATION
1. cd /usr/local/src
2. wget
3. tar -xzvf rkhunter-1.4.0.tar.gz
4. rm -f rkhunter-1.4.0.tar.gz
5. cd rkhunter-1.4.0
6. sh installer.sh --layout /usr --install
7. rkhunter --update

USAGE
# rkhunter -c --sk
[...]
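Since rkhunter can be run as a cron job, a nightly entry is the usual setup. A sketch of a crontab fragment; the 03:00 time and the /usr/local/bin path are just examples, and --cronjob/--update are standard rkhunter options for non-interactive runs:

```
# /etc/cron.d/rkhunter -- nightly rootkit scan at 03:00 (example time)
0 3 * * * root /usr/local/bin/rkhunter --cronjob --update --quiet
```

With --cronjob, rkhunter suppresses colored interactive output so the results land cleanly in mail or a log.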

How to install and use Maldetect


Linux Malware Detect (LMD) is a malware scanner for Linux released under the GNU GPLv2 license, that is designed around the threats faced in shared hosted environments. It uses threat data from network edge intrusion detection systems to extract malware that is actively being used in attacks and generates signatures for detection.
Installation Steps
1. wget
2. tar -xzvf maldetect-current.tar.gz
3. cd maldetect-*
4. sh install.sh

How to start scanning...
maldet -a <your desired directory> (run it inside a screen session)

Eg: maldet -a /home

Installation Of Minecraft Server on Linux


What is a Minecraft server?
Minecraft servers allow players to play online games with other people. They may be run on a hosted Minecraft server service, a dedicated server, a Virtual Private Server or a home machine.

Steps
1. Installing Java
To check if Java has been installed, type the following into the terminal:
which java
If not, you will need to install it; the following will install the Java JDK 1.6:
yum install java-1.6.0-openjdk
2. Create a directory for the server and enter it:
mkdir minecraft
cd minecraft
3. Time to get Minecraft:
wget
Once it has finished downloading, you will need to make sure Minecraft has the correct permissions:
chmod +x minecraft_server.jar
4. Launching Minecraft
Launching the Minecraft server on a Linux server is different from launching on a Windows based server, because when you close the SSH terminal Minecraft will also close. To get around this we will install "screen", which keeps the Minecraft server running after the SSH terminal is closed. To install screen:
yum install screen
Now that screen is installed, we will use it to run the server:
screen
Starting up Minecraft now:
java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui
(The 1024 value can be changed depending on how much RAM your VPS has.)
Ex. 512MB VPS: java -Xmx512M -Xms512M -jar minecraft_server.jar nogui
Ex. 2048MB VPS: java -Xmx2048M -Xms2048M -jar minecraft_server.jar nogui
Your Minecraft server is now up and running and you will be able to play with all your friends.
[...]
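The -Xmx/-Xms examples scale with the machine's RAM. A small sketch that derives a heap size from /proc/meminfo, leaving roughly a quarter of RAM for the OS; the 3/4 split is just a rule of thumb, not a Minecraft requirement:

```shell
# Compute a heap of ~75% of total RAM and print the matching java
# invocation (the fraction is an arbitrary rule of thumb).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_mb=$(( total_kb / 1024 * 3 / 4 ))
echo "java -Xmx${heap_mb}M -Xms${heap_mb}M -jar minecraft_server.jar nogui"
```

On a 2 GB VPS this prints a command close to the 2048MB example above, minus the OS reserve.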

Apache::LimitIPConn Module Installation



Block Proxy Servers from accessing the website


You can block proxy servers at the HTTP level without purchasing any software: insert a small block of rewrite rules in the .htaccess file in your website's root. It's best to copy and paste the code rather than type it, so you can be sure you won't introduce any errors. After you've inserted the code, upload the file to your server. This method is effective. Paste the following entries into the .htaccess file:
RewriteEngine on
RewriteCond %{HTTP:VIA}  !^$ [OR]
RewriteCond %{HTTP:FORWARDED}  !^$ [OR]
RewriteCond %{HTTP:USERAGENT_VIA}  !^$ [OR]
RewriteCond %{HTTP:X_FORWARDED_FOR}  !^$ [OR]
RewriteCond %{HTTP:PROXY_CONNECTION}  !^$ [OR]
RewriteCond %{HTTP:HTTP_PC_REMOTE_ADDR} !^$ [OR]
RewriteCond %{HTTP:HTTP_CLIENT_IP}  !^$
RewriteRule ^(.*)$ - [F]
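The rule block forbids any request carrying one of those proxy-revealing headers (each RewriteCond matches a non-empty header, chained with [OR]). The same decision expressed in shell form, for intuition; the header values below are invented samples:

```shell
# Mimic the .htaccess logic: deny if any proxy header is non-empty.
VIA="1.1 some-proxy"     # invented sample value for the Via header
X_FORWARDED_FOR=""       # empty = header absent
if [ -n "$VIA" ] || [ -n "$X_FORWARDED_FOR" ]; then
  echo "403 Forbidden"
else
  echo "200 OK"
fi
```

Note the trade-off: legitimate clients behind corporate proxies also send Via/X-Forwarded-For, so this blocks them too.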

How to install and use Skipfish


What is skipfish?
Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.

1. First install these packages:
yum install gcc openssl-devel libidn libidn-devel
2. cd /usr/local/src
3. mkdir skipfish
4. cd skipfish
5. wget
6. tar -zxf ./skipfish-2.07b.tgz
7. cd skipfish-2.07b
8. make
9. cp dictionaries/complete.wl skipfish.wl
10. mkdir /tmp/skipfish
11. Testing Skipfish:
./skipfish -o /tmp/skipfish (It gives the below output)
[...]

Cloud Computing: Next Generation Computing


Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

Infrastructure-as-a-Service providers like Amazon Web Services supply virtual server instances and storage on demand; customers use the provider's application program interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and bring more online as soon as required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it's sometimes referred to as utility computing.

Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. GoogleApps is one example of PaaS. Developers need to know that currently there are no standards for interoperability or data portability in the cloud. Some providers will not allow software created by their customers to be moved off the provider's platform.

In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure, the software product and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.


Nagios Installation and Configuration


Nagios is a popular open source computer system and network monitoring software application. It watches hosts and services, alerting users when things go wrong and again when they get better. This article deals with the step-by-step installation and configuration of Nagios. Here it goes:

Become Root
Login as root.

Download
Download the latest version of Nagios.

Unpack The Distribution
To unpack the Nagios distribution:
tar xzf nagios-version.tar.gz
cd nagios-version

Create Nagios User/Group
Add a new user (and group) to the system with the following command:
adduser nagios

Create Installation Directory
Create the base directory where you will install Nagios as follows:
mkdir /usr/local/nagios
Change the owner of the base installation directory to be the Nagios user and group you added earlier as follows:
chown nagios.nagios /usr/local/nagios

Identify Web Server User
The following command can be used to quickly determine what user Apache is running as:
grep "^User" /etc/httpd/conf/httpd.conf

Add Command File Group
Create a new group whose members include the user the web server runs as and the user Nagios runs as. Call this new group 'nagcmd':
/usr/sbin/groupadd nagcmd
Next, add the users that the web server and Nagios run as to the newly created group with the following commands:
/usr/sbin/usermod -G nagcmd apache
/usr/sbin/usermod -G nagcmd nagios

Run the Configure Script
Run the configure script to initialize variables and create a Makefile as follows (the last two options, --with-command-xxx, are optional, but needed if you want to issue external commands):
./configure --prefix=prefix --with-cgiurl=cgiurl --with-htmurl=htmurl --with-nagios-user=someuser --with-nagios-group=somegroup --with-command-group=cmdgroup

Replace prefix with the installation directory that you created in the step above (default is /usr/local/nagios).
Replace cgiurl with the actual URL you will be using to access the CGIs (default is /nagios/cgi-bin). Do NOT append a slash at the end of the URL.
Replace htmurl with the actual URL you will be using to access the HTML for the main interface and documentation (default is /nagios/).
Replace someuser with the name of a user on your system that will be used for setting permissions on the installed files (default is nagios).
Replace somegroup with the name of a group on your system that will be used for setting permissions on the installed files (default is nagios).
Replace cmdgroup with the name of the group running the web server (default is nagios; in the example above it was nagcmd). This will allow group members (i.e. your web server) to submit external commands to Nagios.
OR
./configure
To configure with the default options, you do not need to provide all the options given above.

Compile Binaries
Compile Nagios and the CGIs with the following command:
make all

Installing The Binaries And HTML Files
Install the binaries and HTML files with the following command:
make install

Installing An Init Script
Install the sample init script to /etc/rc.d/init.d/nagios with the following command:
make install-init

Directory Structure And File Locations
cd /usr/local/nagios
You should see five different subdirectories. A brief description of what each directory contains:
bin/ : Nagios core program
etc/ : Main, resource, object, and CGI configuration files should be put here
sbin/ : CGIs
share/ : HTML files (for the web interface and online documentation)
var/ : Empty directory for the log file, status file, retention file, etc.
var/archives : Empty directory for the archived logs
var/rw : Empty directory for the external command file

Open the Apache configuration file and add the follow[...]
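The etc/ directory above is where object definitions go. A minimal, hypothetical host object of the kind you would place there; every name and the address below are invented, and generic-host refers to the template shipped with the Nagios sample configs. After editing, validate with `/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg` before restarting.

```
define host {
    use        generic-host   ; template from the sample configs
    host_name  web01          ; hypothetical host name
    alias      Web Server 01
    address    192.0.2.10     ; example address (TEST-NET-1)
}
```

Running the -v check on every change catches typos before they take the monitoring daemon down.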

Installation Of FFMPEG And FFMPEG-PHP in LINUX


p, li { white-space: pre-wrap; }Method #1: Using yum First, make sure the following binary packages are installed on your server:gcc, gcc4, gcc4-c++, gcc4-gfortran, gd, gd-devel, gmake, ImageMagick, ImageMagick-devel, libcpp, libgcc, libstdc++, make, ncurses, ncurses-devel, ruby, subversion If any of these packages are missing, install them using Yum . For example:yum install PACKAGEInstall rpmforge repository. Follow the instructions on CentOS Wikiwget -ivh rpmforge-release-0.5.2-2.el5.rf.*.rpmInstall ffmpeg, mplayer, mencoder with all supported libraries/modulesyum -y install ffmpeg ffmpeg-devel mplayer mencoder flvtool2Manually, install FFmpeg-Phpcd /usr/local/srcwget jxvf ffmpeg-php-0.6.0.tbz2cd ffmpeg-php-0.6.0phpize./configuremakemake install If FFmpeg-Php is compiled successfully, an module will be generated and copied into the default Php directory. Next,�run the following command to enable FFmpeg-Php. By running this command you will be�adding module into the�php.ini file:echo '' >> /local_path_to_your/php.iniFinal step, restart apacheservice httpd restartOR/etc/init.d/httpd restart- Testing FFmpegVerify that FFmpeg is working properly by running the following two commands:php -r 'phpinfo();' | grep ffmpegYou will get a few lines similar to the following:ffmpegffmpeg-php version => 0.6.0-svnffmpeg-php built on => April� 15 2010 15:31:45ffmpeg-php gd support� => enabledffmpeg libavcodec version => Lavc51.62.0ffmpeg libavformat version => Lavf52.18.0ffmpeg swscaler => disabledffmpeg.allow_persistent => 0 => 0ffmpeg.show_warnings => 0 => 0This is the second command to make sure that FFmpeg is working properly:/usr/local/bin/ffmpegIf you do not get any errors after running the test commands above, FFmpeg, FFmpeg-Php, MPlayer, MEncoder, and FLV2tool are working properly on your server. 
CONGRATULATIONS!

Method #2: Source archives/packages

*** Caution
You must remove all previous installations of FFmpeg and FFmpeg-PHP, then follow the installation instructions below.

Follow these steps (in this order). First, make sure the following binary packages are installed on your server:
gcc, gcc4, gcc4-c++, gcc4-gfortran, gd, gd-devel, gmake, ImageMagick, ImageMagick-devel, libcpp, libgcc, libstdc++, make, ncurses, ncurses-devel, ruby, subversion

If any of these packages are missing, install them using yum. For example:
yum install PACKAGE

*** Caution
The following source packages are regularly updated with newer versions. You might experience technical issues if you download and install a newer/older version of any of these applications. You must remove all previous installations of FFmpeg and FFmpeg-PHP, then follow the installation instructions below.

To install FFmpeg from source, execute the following commands (in this order).

Let's create a directory to do our work in:
mkdir /usr/local/src
cd /usr/local/src

Download the source packages:
wget http://biznetnetworks.dl.sourceforge...php-0.6.0.tbz2
wget[...]
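The php.ini step above is easy to get wrong if the install notes are run twice. Below is a minimal sketch of an idempotent version, run against a scratch file so nothing real is touched; the module name ffmpeg.so and the paths are illustrative assumptions, not taken verbatim from this post.

```shell
set -e
ini=$(mktemp)                        # stand-in for /local_path_to_your/php.ini
echo 'memory_limit = 128M' > "$ini"  # pretend pre-existing setting

# Append the extension directive only if it is not already present,
# so re-running the installation steps never duplicates the line.
grep -qx 'extension=ffmpeg.so' "$ini" || echo 'extension=ffmpeg.so' >> "$ini"
grep -qx 'extension=ffmpeg.so' "$ini" || echo 'extension=ffmpeg.so' >> "$ini"

grep -cx 'extension=ffmpeg.so' "$ini"   # prints 1: appended exactly once
rm -f "$ini"
```

The guard also makes the step safe to paste into a larger provisioning script.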



Linux Interview Questions and Answers

Q: You need to see the last fifteen lines of the files dog, cat and horse. What command should you use?
A: tail -15 dog cat horse
The tail utility displays the end of a file. The -15 tells tail to display the last fifteen lines of each specified file.

Q: Who owns the data dictionary?
A: The SYS user owns the data dictionary. The SYS and SYSTEM users are created when the database is created.

Q: You routinely compress old log files. You now need to examine a log from two months ago. In order to view its contents without first having to decompress it, use the _________ utility.
A: zcat
The zcat utility allows you to examine the contents of a compressed file much the same way that cat displays a file.

Q: You suspect that you have two commands with the same name, as the command is not producing the expected results. What command can you use to determine the location of the command being run?
A: which
The which command searches your path until it finds a command that matches the command you are looking for and displays its full path.

Q: You locate a command in the /bin directory but do not know what it does. What command can you use to determine its purpose?
A: whatis
The whatis command displays a summary line from the man page for the specified command.

Q: You wish to create a link to the /data directory in bob's home directory, so you issue the command ln /data /home/bob/datalink but the command fails. What option should you use in this command line to be successful?
A: Use the -s option
A hard link to a directory is not allowed; to link to a directory you must create a symbolic link with the -s option.

Q: When you issue the command ls -l, the first character of the resulting display represents the file's ___________.
A: type
The first character of the permission block designates the type of file that is being displayed.

Q: What utility can you use to show a dynamic listing of running processes?
A: top
The top utility shows a listing of all running processes that is dynamically updated.

Q: Where is standard output usually directed?
A: To the screen or display. By default, your shell directs standard output to your screen or display.

Q: You wish to restore the file memo.ben which was backed up in the tarfile MyBackup.tar. What command should you type?
A: tar xf MyBackup.tar memo.ben
This command uses the x switch to extract a file. Here the file memo.ben will be restored from the tarfile MyBackup.tar.

Q: You need to view the contents of the tarfile called MyBackup.tar. What command would you use?
A: tar tf MyBackup.tar
The t switch tells tar to display the contents and the f modifier specifies which file to examine.

Q: You want to create a compressed backup of the users' home directories. What utility should you use?
A: tar
You can use the z modifier with tar to compress your archive at the same time as creating it.

Q: What daemon is responsible for tracking events on your system?
A: syslogd
The syslogd daemon is responsible for tracking system information and saving it to specified log files.

Q: You have a file called phonenos that is almost 4,000 lines long. What text filter can you use to split it into four pieces each 1,000 lines long?
A: split
The split text filter will divide files into equally sized pieces. The default length of each piece is 1,000 lines.

Q: You would like to temporarily change your command line editor to be vi. What command should you type to change it?
A: set -o vi
The set command is used to assign environment variables. In this case, you are instructing your shell to assign vi as your command line editor. However, once you log off and log back in you will return to the previously defined command line editor.

Q: What account is created when you install Linux?
A: root
Whenever you install Linux, only one user account is created. This is t[...]
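Several of the tar, split, and zcat answers above can be checked directly in a scratch directory; the sketch below does exactly that (the file names are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"

# tar: create, list, and restore a single file (tar cf / tf / xf)
echo "hello ben" > memo.ben
tar cf MyBackup.tar memo.ben
tar tf MyBackup.tar              # lists: memo.ben
rm memo.ben
tar xf MyBackup.tar memo.ben     # restores just that file
cat memo.ben                     # prints: hello ben

# split: a 4,000-line file becomes four 1,000-line pieces (xaa..xad)
seq 4000 > phonenos
split phonenos
ls x* | wc -l                    # prints: 4
wc -l < xaa                      # prints: 1000

# zcat: read a compressed log without decompressing it on disk
gzip -c memo.ben > old.log.gz
zcat old.log.gz                  # prints: hello ben
```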



An A-Z Index of the Bash command line for Linux.

alias     Create an alias •
apropos   Search Help manual pages (man -k)
apt-get   Search for and install software packages (Debian/Ubuntu)
aptitude  Search for and install software packages (Debian/Ubuntu)
aspell    Spell Checker
awk       Find and Replace text, database sort/validate/index

b
basename  Strip directory and suffix from filenames
bash      GNU Bourne-Again SHell
bc        Arbitrary precision calculator language
bg        Send to background
break     Exit from a loop •
builtin   Run a shell builtin
bzip2     Compress or decompress named file(s)

c
cal       Display a calendar
case      Conditionally perform a command
cat       Concatenate and print (display) the content of files
cd        Change Directory
cfdisk    Partition table manipulator for Linux
chgrp     Change group ownership
chmod     Change access permissions
chown     Change file owner and group
chroot    Run a command with a different root directory
chkconfig System services (runlevel)
cksum     Print CRC checksum and byte counts
clear     Clear terminal screen
cmp       Compare two files
comm      Compare two sorted files line by line
command   Run a command - ignoring shell functions •
continue  Resume the next iteration of a loop •
cp        Copy one or more files to another location
cron      Daemon to execute scheduled commands
crontab   Schedule a command to run at a later time
csplit    Split a file into context-determined pieces
cut       Divide a file into several parts

d
date      Display or change the date & time
dc        Desk Calculator
dd        Convert and copy a file, write disk headers, boot records
ddrescue  Data recovery tool
declare   Declare variables and give them attributes •
df        Display free disk space
diff      Display the differences between two files
diff3     Show differences among three files
dig       DNS lookup
dir       Briefly list directory contents
dircolors Colour setup for `ls'
dirname   Convert a full pathname to just a path
dirs      Display list of remembered directories
dmesg     Print kernel & driver messages
du        Estimate file space usage

e
echo      Display message on screen •
egrep     Search file(s) for lines that match an extended expression
eject     Eject removable media
enable    Enable and disable builtin shell commands •
env       Environment variables
ethtool   Ethernet card settings
eval      Evaluate several commands/arguments
exec      Execute a command
exit      Exit the shell
expect    Automate arbitrary applications accessed over a terminal
expand    Convert tabs to spaces
export    Set an environment variable
expr      Evaluate expressions

f
false     Do nothing, unsuccessfully
fdformat  Low-level format a floppy disk
fdisk     Partition table manipulator for Linux
fg        Send job to foreground
fgrep     Search file(s) for lines that match a fixed string
file      Determine file type
find      Search for files that meet a desired criteria
fmt       Reformat paragraph text
fold      Wrap text to fit a specified width
for       Expand words, and execute commands
format    Format disks or tapes
free      Display memory usage
fsck      File system consistency check and repair
ftp       File Transfer Protocol
function  Define Function Macros
fuser     Identify/kill the process that is accessing a file

g
gawk      Find and Replace text within file(s)
getopts   Parse positional parameters
grep      Search file(s) for lines that match a given pattern
groupadd  Add a user security group
groupdel  Delete a group
groupmod  Modify a group
groups    Print group names a user is in
gzip      Compress or decompress named file(s)

h
hash      Remember the full pathname of a name argument
head      Output the first part of file(s)
help      Display help for a built-in command •
history   Command History
hostname  Print or set system name

i
iconv     Convert the character set of a file
id        Print user and group id's
if        Conditionally perform a command
ifconfig  Configure a network interface
ifdown    Stop a network interface
ifup      Start a network interface up
import    Captur[...]
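A few of the indexed text utilities in action; the sample paths and strings below are illustrative only:

```shell
# basename / dirname: split a path into file and directory parts
basename /var/log/syslog          # prints: syslog
dirname  /var/log/syslog          # prints: /var/log

# cut: divide a line into fields (field 1 of a colon-separated record)
echo "root:x:0:0" | cut -d: -f1   # prints: root

# expr: evaluate an expression
expr 7 \* 6                       # prints: 42

# fold: wrap text to a specified width
echo "abcdefgh" | fold -w 4       # prints abcd and efgh on two lines
```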

RSS--Really Simple Syndication


RSS (most commonly expanded as Really Simple Syndication) is a family of web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format.[2] An RSS document (which is called a "feed", "web feed",[3] or "channel") includes full or summarized text, plus metadata such as publishing dates and authorship. Web feeds benefit publishers by letting them syndicate content automatically. They benefit readers who want to subscribe to timely updates from favored websites or to aggregate feeds from many sites into one place.

RSS feeds can be read using software called an "RSS reader", "feed reader", or "aggregator", which can be web-based, desktop-based, or mobile-device-based. A standardized XML file format allows the information to be published once and viewed by many different programs. The user subscribes to a feed by entering into the reader the feed's URI or by clicking an RSS icon in a web browser that initiates the subscription process. The RSS reader checks the user's subscribed feeds regularly for new work, downloads any updates that it finds, and provides a user interface to monitor and read the feeds.

RSS formats are specified using XML, a generic specification for the creation of data formats. Although RSS formats have evolved since as early as March 1999,[4] it was between 2005 and 2006 when RSS gained widespread use, and the ("Feed-icon.svg") icon was decided upon by several major web browsers.[5]

History
Main article: History of web syndication technology
The RSS formats were preceded by several attempts at web syndication that did not achieve widespread popularity. The basic idea of restructuring information about websites goes back to as early as 1995, when Ramanathan V. Guha and others in Apple Computer's Advanced Technology Group developed the Meta Content Framework.[6] For a more detailed discussion of these early developments, see the history of web syndication technology.

RDF Site Summary, the first version of RSS, was created by Guha at Netscape in March 1999 for use on the My.Netscape.Com portal. This version became known as RSS 0.9.[4] In July 1999, Dan Libby of Netscape produced a new version, RSS 0.91,[2] which simplified the format by removing RDF elements and incorporating elements from Dave Winer's scriptingNews syndication format.[7] Libby also renamed RSS "Rich Site Summary" and outlined further development of the format in a "futures document".[8]

This would be Netscape's last participation in RSS development for eight years. As RSS was being embraced by web publishers who wanted their feeds to be used on My.Netscape.Com and other early RSS portals, Netscape dropped RSS support from My.Netscape.Com in April 2001 during new owner AOL's restructuring of the company, also removing documentation and tools that supported the format.[9]

Two entities emerged to fill the void, with neither Netscape's help nor approval: the RSS-DEV Working Group and Winer, whose UserLand Software had published some of the first publishing tools outside of Netscape that could read and write RSS. Winer published a modified version of the RSS 0.91 specification on the UserLand website, covering how it was being used in his company's products, and claimed copyright to the document.[10] A few months later, UserLand filed a U.S. trademark registration for RSS, but failed to respond to a USPTO trademark examiner's request and the request was rejected in December 2001.[11]

The RSS-DEV Working Group, a project whose members included Guha and representatives of O'Reilly Media and Mo[...]
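The item-plus-metadata structure described above is easy to see in a minimal RSS 2.0 document. The feed below is purely illustrative (the URLs and titles are made up), written out and crudely inspected from the shell:

```shell
set -e
cd "$(mktemp -d)"

# A minimal RSS 2.0 feed: one channel, one item with a pubDate.
cat > feed.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>TechMate</title>
    <link>http://example.com/</link>
    <description>Thoughts of a Linux admin</description>
    <item>
      <title>Sysstat(sar) for UNIX / Linux</title>
      <link>http://example.com/sar</link>
      <pubDate>Tue, 06 Mar 2018 19:02:05 +0530</pubDate>
    </item>
  </channel>
</rss>
EOF

# A reader polls the feed and extracts per-item metadata; a crude version:
grep -c '<item>' feed.xml                                # prints: 1
sed -n 's/.*<pubDate>\(.*\)<\/pubDate>.*/\1/p' feed.xml  # prints the date
```

A real aggregator would use an XML parser rather than grep/sed, but the shape of the data is the same.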

Ruby (programming language)


Ruby is a dynamic, reflective, general-purpose object-oriented programming language that combines syntax inspired by Perl with Smalltalk-like features. Ruby originated in Japan during the mid-1990s and was first developed and designed by Yukihiro "Matz" Matsumoto. It was influenced primarily by Perl, Smalltalk, Eiffel, and Lisp.

Ruby supports multiple programming paradigms, including functional, object-oriented, imperative and reflective. It also has a dynamic type system and automatic memory management; it is therefore similar in varying respects to Python, Perl, Lisp, Dylan, Pike, and CLU.

The standard 1.8.7 implementation is written in C, as a single-pass interpreted language. There is currently no specification of the Ruby language, so the original implementation is considered to be the de facto reference. As of 2010[update], there are a number of complete or upcoming alternative implementations of the Ruby language, including YARV, JRuby, Rubinius, IronRuby, MacRuby and HotRuby, each of which takes a different approach, with IronRuby, JRuby and MacRuby providing just-in-time compilation and MacRuby also providing ahead-of-time compilation. The official 1.9 branch uses YARV, as will 2.0 (development), and will eventually supersede the slower Ruby MRI.

History
Yukihiro Matsumoto, the creator of Ruby.
Ruby was conceived on February 24, 1993 by Yukihiro Matsumoto, who wished to create a new language that balanced functional programming with imperative programming.[1] Matsumoto has stated, "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language".[2]

Etymology of the name "Ruby"
The name "Ruby" was decided on during an online chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language.[3] Initially two names were proposed: "Coral" and "Ruby", with the latter being chosen by Matsumoto in a later email to Ishitsuka.[4] Matsumoto later stated that a factor in choosing the name "Ruby" was that it was the birthstone of one of his colleagues.[5]

First publication
The first public release, Ruby 0.95, was announced on Japanese domestic newsgroups on December 21, 1995.[6][7] Subsequently three more versions of Ruby were released in two days.[3] The release coincided with the launch of the Japanese-language ruby-list mailing list, which was the first mailing list for the new language. Already present at this stage of development were many of the features familiar in later releases of Ruby, including object-oriented design, classes with inheritance, mixins, iterators, closures, exception handling, and garbage collection.[8]

Ruby 1.0
Ruby reached version 1.0 on December 25, 1996.[3] Following the release of Ruby 1.3 in 1999, the first English-language mailing list ruby-talk began,[2] which signalled a growing interest in the language outside of Japan. In September 2000, the first English-language book, Programming Ruby, was printed; it was later freely released to the public, further widening the adoption of Ruby amongst English speakers.

Ruby on Rails
Around 2005, interest in the Ruby language surged in tandem with Ruby on Rails, a popular web application framework written in Ruby. Rails is frequently credited with making Ruby "famous" and the association is so strong that the two are sometimes conflated by programmers who are new to Ruby.[9]

Ruby 1.9.1
The latest stable version of the reference implementation is 1.9.1. Ruby 1.9.1 introduces many s[...]

Go (programming language)


Go is a compiled, garbage-collected, concurrent programming language developed by Google Inc.[3]

The initial design of Go was started in September 2007 by Robert Griesemer, Rob Pike, and Ken Thompson,[1] building on previous work related to the Inferno operating system.[4] Go was officially announced in November 2009, with implementations released for the Linux and Mac OS X platforms.[5] At the time of its launch, Go was not considered to be ready for adoption in production environments.[6] In May 2010, Rob Pike stated publicly that Go is being used "for real stuff" at Google.[7]

Description
The syntax of Go is close to that of C except for the type declarations; other syntactical differences are the missing parentheses around for and if expressions. It is designed for exceptionally fast compilation times, even on modest hardware.[8] The language requires garbage collection. Certain concurrency-related structural conventions of Go (channels and alternative channel inputs) are borrowed from Tony Hoare's CSP. Unlike previous concurrent programming languages such as occam or Limbo, Go does not provide any in-built notion of safe or verifiable concurrency.[9] Today, Go does not have any kind of built-in generics implementation, but one may be added in the future.[10]

Features not included in Go are type inheritance, generic programming, assertions, method overloading, and pointer arithmetic.[1] Of these, the Go authors express an openness to generic programming and explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language.[1] Initially, the language did not include exception handling, but this was added in March 2010.[11][12] Maps (also known as hashes or dictionaries) are an intrinsic part of the language, as are strings.

Visibility of functions outside of their defining file is defined implicitly according to the capitalization of their identifier, in contrast to C, where an explicit static keyword is used.[13]

Implementations
There are currently two Go compilers. 6g (and its supporting tools, collectively known as gc) is written in C, using yacc/Bison for the parser. Gccgo is a Go compiler with a C++ front-end with a recursive descent parser coupled to the standard GCC backend.[14] Both compilers only work on Unix-like systems, although a Windows version for non-production use is available;[15] it is maintained by a developer named Hector Chu[16] separate from the Go language team.[17] There is also a Go version available for Cygwin.[18] Recently, Windows ports based on MinGW are being deployed and updated.[19]

Examples
The following is a Hello world program in Go:

package main

import "fmt"

func main() {
	fmt.Println("Hello, World")
}

Go's automatic semicolon insertion feature requires that opening braces not be placed on their own lines, and this is thus the preferred brace style; the examples shown comply with this style.[20]

Example illustrating how to write a program like the Unix echo command in Go:[21]

package main

import (
	"os"
	"flag" // command line option parser
)

var omitNewline = flag.Bool("n", false, "don't print final newline")

const (
	Space   = " "
	Newline = "\n"
)

func main() {
	flag.Parse() // Scans the arg list and sets up flags
	var s string = ""
	for i := 0; i < flag.NArg(); i++ {
		if i > 0 {
			s += Space
		}
		s += flag.Arg(i)
	}
	if !*omitNewline {
		s += Newline
	}
	os.Stdout.WriteString(s)
}

Reception
Go's initial release led to much discussion. Michele Simionato wrote in an article for[...]

Django (web framework)


Django (pronounced /ˈdʒæŋɡoʊ/ JANG-goh[1]) is an open source web application framework, written in Python, which follows the model-view-controller architectural pattern.[2] It was originally developed to manage several news-oriented sites for The World Company[3] of Lawrence, Kansas, and was released publicly under a BSD license in July 2005; the framework was named after gypsy jazz guitarist Django Reinhardt.[4] In June 2008 it was announced that the newly formed Django Software Foundation would take care of Django in the future.[5]

Django's primary goal is to ease the creation of complex, database-driven websites. Django emphasizes reusability and "pluggability" of components, rapid development, and the principle of DRY (Don't Repeat Yourself). Python is used throughout, even for settings, files, and data models. Django also provides an optional administrative CRUD (create, read, update and delete) interface that is generated dynamically through introspection and configured via admin models.

Components
Screenshot of the Django admin interface for modifying a user.
The core Django framework consists of an object-relational mapper which mediates between data models (defined as Python classes) and a relational database; a regular-expression-based URL dispatcher; a view system for processing requests; and a templating system.

Also included in the core framework are:
 * A lightweight, standalone web server for development and testing.
 * A form serialization and validation system which can translate between HTML forms and values suitable for storage in the database.
 * A caching framework which can use any of several cache methods.
 * Support for middleware classes which can intervene at various stages of request processing and carry out custom functions.
 * An internal dispatcher system which allows components of an application to communicate events to each other via pre-defined signals.
 * An internationalization system, including translations of Django's own components into a variety of languages.
 * A serialization system which can produce and read XML and/or JSON representations of Django model instances.
 * A system for extending the capabilities of the template engine.
 * An interface to Python's built-in unit test framework.

Bundled applications
The main Django distribution also bundles a number of applications in its "contrib" package, including:
 * An extensible authentication system.
 * The dynamic administrative interface.
 * Tools for generating RSS and Atom syndication feeds.
 * A flexible commenting system.
 * A sites framework that allows one Django installation to run multiple websites, each with their own content and applications.
 * Tools for generating Google Sitemaps.
 * Tools for preventing cross-site request forgery.
 * Template libraries which enable the use of lightweight markup languages such as Textile and Markdown.
 * A framework for creating GIS applications.

Applications built on Django
 * The Pinax framework provides reusable applications aimed at Django-based social networking websites.
 * RapidSMS is a framework for SMS applications built on Django.
 * Pootle is an online translation management tool.
 * Review Board is a web-based code review tool.

Server arrangements
Django can be run in conjunction with Apache using mod_python or mod_wsgi. Django also includes the ability to launch a FastCGI server, enabling use behind any web server which supports FastCGI. It should[...]

Brain–computer interface


A brain–computer interface (BCI), sometimes called a direct neural interface or a brain–machine interface, is a direct communication pathway between a brain and an external device. BCIs are often aimed at assisting, augmenting or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[1][2] The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.

The field of BCI has since blossomed spectacularly, mostly toward neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels.[3] Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-nineties.

BCI versus neuroprosthetics
Main article: Neuroprosthetics
Neuroprosthetics is an area of neuroscience concerned with neural prostheses—using artificial devices to replace the function of impaired nervous systems or sensory organs. The most widely used neuroprosthetic device is the cochlear implant, which, as of 2006, has been implanted in approximately 100,000 people worldwide.[4] There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.

The differences between BCIs and neuroprosthetics are mostly in the ways the terms are used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs usually connect the brain (or nervous system) with a computer system. Practical neuroprosthetics can be linked to any part of the nervous system—for example, peripheral nerves—while the term "BCI" usually designates a narrower class of systems which interface with the central nervous system.

The terms are sometimes used interchangeably, and for good reason. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, the ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.

Animal BCI research
Rats implanted with BCIs in Theodore Berger's experiments.
Several laboratories have managed to record signals from monkey and rat cerebral cortices in order to operate BCIs to carry out movement. Monkeys have navigated computer cursors on screen and commanded robotic arms to perform simple tasks simply by thinking about the task and without any motor output.[5] In May 2008 photographs that showed a monkey operating a robotic arm with its mind at the Pittsburgh University Medical Center were published in a number of well known science journals and magazines.[6] Other research on cats has decoded visual signals.

Early work
Monkey operating a robotic arm with brain–computer interfacing.
The operant conditioning studies of Fetz and colleagues first showed that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity.[7] Such work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.[8]

Studies that developed algorithms [...]

Surface Computer


A surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor and keyboard.

The category was created by Microsoft with Surface (codenamed Milan), Microsoft's surface computer, which was based entirely on a multi-touch interface, used a coffee-table-like design, and was unveiled on 30 May 2007. Users can interact with the machine by touching or dragging their fingertips and objects such as paintbrushes across the screen, or by setting real-world items tagged with special bar-code labels on top of it.

The Surface is a horizontal display in a table-like form. Somewhat similar to the iPhone, the Surface has a screen that can register multiple touches and uses them to navigate multimedia content. Unlike the iPhone, which uses the fingers' electrical properties to detect touch, the Surface uses a system of infrared cameras to detect input. Uploading digital files only requires each object (e.g. a Bluetooth-enabled digital camera) to be placed on the Surface. People can physically move pictures around the screen with their hands, or even shrink or enlarge them. The first units of the Surface will be information kiosks in the Harrah's family of casinos.

Besides the Microsoft-created devices, other computer firms have also entered the surface computing market. These include Mitsubishi Electric with its DiamondTouch, and Smart Surface Sdn Bhd [1] with its SmartSurface.

Also receiving units will be T-Mobile, for comparing several cell phones side-by-side, and Sheraton Hotels and Resorts, which will use Surface to service lobby customers in numerous ways.[2][3]

The Surface has a 2.0GHz Core 2 Duo processor, 2GB of memory, an off-the-shelf graphics card, a scratch-proof, spill-proof surface, a DLP projector, and 5 infrared cameras as mentioned above. However, the expensive components required for the interface also give the Surface a price tag of between $12,500 and $15,000.[4]

A table top computer from Microsoft that incorporates multitouch, hand gestures and optical recognition of objects placed on the screen. Introduced in 2008, the Surface's 30" touch screen is used without a mouse and keyboard and is large enough for group participation. See multitouch.



SmartWater is an anti-criminal system marketed in the United Kingdom by Smartwater Technology Ltd. It consists of a liquid containing a code which can be read under ultraviolet light.[1] It is intended to be applied to valuable items, so that if they are stolen and later seized by police, their original owner can be determined. Another application is a sprinkler system that sprays a burglar with the (invisible) fluid, which can't be washed off and lasts for months, to generate evidence which connects a suspect to a specific location.

Development of SmartWater was started in the mid-1990s by Phil Cleary, a retired police detective and later CEO of SmartWater Ltd, and his brother Mike Cleary, a chemist.

SmartWater comes in three variants, "Index Solutions", "Indsol Tracer" and "SmartWater Instant", which use different techniques to embed such a code - which, according to Phil Cleary, allows "millions of chemical signatures" and is an identifier superior to genetic fingerprinting DNA.

The "Index Solutions" variant is a water-based solution containing low-level additives, which are blended using a binary sequence to ensure uniqueness. 
The Index Solution is contained within a spray system which is activated by an intruder detection unit, similar to a burglar alarm, and marks the intruder with a unique forensic spray, which the police locate using a black (UV) light.[4]

The "Indsol Tracer" variant is a polymer emulsion[5] which blends different chemical agents according to a binary code, allowing 10 billion different possibilities, as stated by the company.[3]

The "SmartWater Instant" variant consists mainly of a copolymer of vinyl acetate in isopropyl alcohol.[6] This fluid contains millions of tiny fragments; a unique number called a "SIN" ("SmartWater identification number"), registered in a national police database together with the owner's details, is etched into each of those particles.[3]

Security expert Bruce Schneier has pointed out that abuse of SmartWater is possible, because an owner of a personalised solution can easily administer it to other people's valuable items.[7] However, in a later article, Schneier accepted that SmartWater worked as a deterrent, citing the publication of a research paper prepared by a team led by Professor Martin Gill,[8] who interviewed over 100 criminals and asked whether or not the presence of SmartWater would deter them from committing a burglary, with 74% saying that it would.[9]

SmartWater has been used by police to convict both Tier 1[10] and Tier 2 criminals,[11][12] and the company claims in press releases to have over 600 convictions to its name.

In addition, the company developed a holistic crime reduction programme, called "The SmartWater Strategy". During the first six months of a pilot scheme involving 100 households in a part of Kent, police recorded a reduction in burglary of 94%.[13] Another area that has used the SmartWater Strategy is Nottingham, where 30,000 homes have now had their property marked with individual SmartWater signatures and covert operations using SmartWater were instigated by the police. A 40% reduction in burglary has been reported since the start of the initiative.