Preview: Inderscience


This New Articles Channel contains the latest articles published in Inderscience's distinguished academic, scientific and professional journals.


A fast and parallel algorithm for frequent pattern mining from big data in many-task environments
Many studies have tried to efficiently discover frequent patterns in large databases. The algorithms used in these studies fall into two main categories: apriori algorithms and frequent pattern growth (FP-growth) algorithms. Apriori algorithms operate according to a generate-and-test approach, so their performance suffers from testing too many candidate itemsets. Therefore, most recent studies have applied an FP-growth approach to the discovery of frequent patterns. The rapid growth of data, however, has introduced new challenges for the mining of frequent patterns, in terms of both execution efficiency and scalability. Big data often contains a large number of items, a large number of transactions and a long average transaction length, which result in large FP-trees. In addition to its dependence on data characteristics, FP-tree size is also sensitive to the minimum support threshold: a small support threshold is likely to introduce many branches per node, greatly enlarging the FP-tree and the number of reconstructed conditional pattern-based trees. In this paper, we propose a novel algorithm and architecture for efficiently mining frequent patterns from big data in distributed many-task computing environments. Through empirical evaluation under various simulation conditions, we show that the proposed method delivers excellent execution times.
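
To make the generate-and-test bottleneck concrete, here is a minimal apriori sketch (toy transactions and threshold, not the paper's algorithm): every candidate itemset must be counted against the whole database, which is exactly the repeated scanning cost that FP-growth avoids.

```python
# Toy transaction database; min_support is an absolute count threshold.
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]
min_support = 2

def apriori(transactions, min_support):
    """Generate-and-test: build k-itemset candidates from frequent (k-1)-itemsets."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    # Frequent 1-itemsets.
    level = []
    for i in items:
        count = sum(1 for t in transactions if i in t)
        if count >= min_support:
            frequent[frozenset([i])] = count
            level.append(frozenset([i]))
    k = 2
    while level:
        # Candidate generation: unions of pairs of frequent (k-1)-itemsets.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = []
        for c in candidates:
            # The costly "test" step: one pass over all transactions per candidate.
            count = sum(1 for t in transactions if c <= t)
            if count >= min_support:
                frequent[c] = count
                level.append(c)
        k += 1
    return frequent

freq = apriori(transactions, min_support)
```

On this toy database all three pairs are frequent but the triple {a, b, c} occurs only once and is pruned.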

A real time vehicle management system implementation on cloud computing platform
This study attempts to use the high-speed computing capability of cloud computing and smartphones to achieve a vehicle management system running on moving cars. Our research exploits smartphones or tablet PCs as in-car devices that provide location-based services on the mobile device with the global positioning system (GPS), a wireless camcorder, Google Maps, visualised information and graphical presentation to deliver personalised services (Zhang et al., 2010; Sultan, 2010). This study allows users to instantly access the information and manage the movement of cars. Additionally, the location of golf carts, the surrounding environment and personnel information are transmitted through a wireless network to the monitoring centre, which can thus provide real-time services, motorcade management and caddy-care services. If the proposed mechanism performs well, this study will be extended to other applications in the near future.

pvFPGA: paravirtualising an FPGA-based hardware accelerator towards general purpose computing
This paper presents an ameliorated design of pvFPGA, a novel system design solution for virtualising an FPGA-based hardware accelerator with a virtual machine monitor (VMM). The accelerator design on the FPGA can be used to accelerate various applications, regardless of their computation latencies. In the implementation, we adopt the Xen VMM to build a paravirtualised environment, and a Xilinx Virtex-6 as the FPGA accelerator. Data are transferred between the x86 server and the FPGA accelerator through direct memory access (DMA), and a streaming pipeline technique is adopted to improve the efficiency of data transfer. Several solutions to streaming pipeline hazards are discussed in this paper. In addition, we propose a technique, hyper-requesting, which enables portions of two requests directed to different accelerator applications to be processed on the FPGA accelerator simultaneously through DMA context switches, achieving request-level parallelism. The experimental results show that hyper-requesting reduces request turnaround time by up to 80%.

ReconsMap: a reliable controller-switch mapping in software defined networks
Software defined networking (SDN) reshapes the future network structure by decoupling the control plane from the data plane. In large SDN networks, multiple controllers or controller domains are deployed, where each controller has a logically centralised view while managing a set of switches. Recent studies focus mostly on controller placement, but simply assign each switch to its closest controller. Such 'latency-first' controller-switch mapping may lead to controller overload and a vulnerable spanning tree. In this paper, we illustrate this case and propose a reliable controller-switch mapping model (the ReconsMap model). This model: 1) adjusts the number of mapped switches according to the network traffic; 2) ensures an acceptable propagation delay; 3) builds a robust spanning tree for each controller to ensure the lowest data loss. Computational results on OS3E and the topology zoo show that ReconsMap effectively reduces data loss, alleviates controller overload and improves the robustness of the spanning tree.
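
The contrast between 'latency-first' mapping and a load-aware alternative can be sketched as follows; this is a toy illustration with assumed delays and capacities, not the ReconsMap model itself.

```python
# Propagation delay (ms) from each switch to each controller (assumed values).
delays = {
    "s1": {"c1": 1, "c2": 5},
    "s2": {"c1": 2, "c2": 4},
    "s3": {"c1": 2, "c2": 3},
}
capacity = {"c1": 2, "c2": 2}   # max switches per controller (assumed)

def latency_first(delays):
    """Assign every switch to its closest controller, ignoring load."""
    return {s: min(d, key=d.get) for s, d in delays.items()}

def load_aware(delays, capacity):
    """Prefer the closest controller that still has spare capacity."""
    load = {c: 0 for c in capacity}
    mapping = {}
    for s, d in delays.items():
        for c in sorted(d, key=d.get):      # closest first
            if load[c] < capacity[c]:
                mapping[s] = c
                load[c] += 1
                break
    return mapping

naive = latency_first(delays)        # all three switches pick c1 -> overload
balanced = load_aware(delays, capacity)
```

Here latency-first piles all three switches onto c1, while the load-aware variant diverts s3 to c2 at a slightly higher delay.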

Improving the performance by message broadcasting in VANETS
In vehicular ad hoc networks, messages are broadcast spontaneously by active mobile nodes to all neighbouring nodes within connectivity range. These important messages are time sensitive and may suffer severe delay. The main aim of this work is to broadcast safety messages while avoiding packet collisions and reducing packet loss, so that the efficiency of the network is improved; this is analysed using different protocols in vehicular ad hoc networks. Typical carrier sense multiple access, in which users contend for channel access, does not seem suitable for this application. Instead, we follow a protocol-sequence method to broadcast the safety message: the protocol sequences are strings of 0s and 1s, and each user in the network transmits a packet in the time slots where its sequence reads 1. This method does not require time synchronisation between the mobile nodes. We compare the delay performance with the dedicated short range communication protocol, an ALOHA-type random access scheme, and the zone routing protocol. By arranging the data packets, a hard delay guarantee can be achieved, and the network delay is reduced under the zone routing protocol.
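
The slot-access idea can be illustrated with a small sketch; the sequences and helper functions here are assumptions for illustration, not the paper's constructions.

```python
# Each node owns a binary protocol sequence and transmits in the slots
# where its sequence has a 1; no clock synchronisation of data is modelled.
sequences = {
    "nodeA": [1, 0, 1, 0],
    "nodeB": [0, 1, 1, 0],
}

def slots_used(seq):
    """Time slots in which a node with this sequence transmits."""
    return [t for t, bit in enumerate(seq) if bit == 1]

def collisions(sequences):
    """Slots where more than one node transmits simultaneously."""
    n_slots = len(next(iter(sequences.values())))
    return [t for t in range(n_slots)
            if sum(seq[t] for seq in sequences.values()) > 1]
```

With these two sequences, nodeA transmits in slots 0 and 2, and the only collision occurs in slot 2; well-designed sequence sets bound how often such overlaps can happen.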

Improved reconfigurable hyper-pipeline soft-core processor on FPGA for SIMD
Reconfiguration is a powerful computational model in which the processors can be changed dynamically during the execution phase of the system. This paper presents dynamic reconfigurable register file allocation in the hyper-pipelined OR1200 (OpenRISC) for single instruction multiple data (SIMD). The OR1200 instantly reconfigures the actual register file into the reconfigurable register file according to the requirements of the application. The unused general purpose registers obtained during the reconfiguration process can be used for the hyper-pipelining technique, which improves the overall performance of the single-core processor system. Releasing the unused registers thus reduces power consumption and increases the execution speed of the OR1200. The proposed reconfigurable technique is implemented in Verilog and tested on the MediaBench multimedia benchmark dataset, showing a 16.80% reduction in register utilisation for the multimedia dataset and power reductions of up to 72.7% with reconfigurable modules. The proposed technique is configured on a Virtex-6 field programmable gate array (FPGA), and the results of the existing and the proposed reconfigured OR1200 are analysed.

Modelling epidemic routing with heterogeneous infection rate
Epidemic routing has been integrated into many applications, ranging from worm propagation in online social networks to message diffusion in offline physical systems. Modelling epidemic routing provides a baseline for evaluating system performance; it is also very desirable for engineers to have theoretical guidance before they deploy a real system. Early works analyse the dynamics of epidemic routing with the average contact rate, i.e., each node encounters the same number of other nodes in a time slot. They neglect the status of encountered nodes (i.e., infected or susceptible), which makes the existing models defective. In this paper, we observe that the infectivity of nodes is heterogeneous rather than homogeneous: two nodes with the same contact rate may exhibit different infectivities. Motivated by this observation, we first use the infection rate to reflect the infectivity of infected nodes. We then model epidemic routing with the average infection rate instead of the contact rate. Finally, we compare our model with existing works through theoretical analysis and simulations. The results show that our model matches more closely than the state-of-the-art works, which provide only an upper bound on the number of infected nodes.
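
A minimal sketch of the modelling contrast (illustrative parameters only, not the paper's model): the classic mean-field SI recurrence with a single average rate, against a crude heterogeneous variant that averages two subpopulations with different infection rates.

```python
N = 1000          # population size (assumed)
beta_avg = 0.3    # average infection rate per time step (assumed)
steps = 20

def si_curve(beta, n, steps, i0=1):
    """Mean-field SI recurrence: I(t+1) = I(t) + beta * I * (N - I) / N."""
    infected = [float(i0)]
    for _ in range(steps):
        i = infected[-1]
        infected.append(min(n, i + beta * i * (n - i) / n))
    return infected

homog = si_curve(beta_avg, N, steps)
# Heterogeneity: average the curves of two subpopulations with different
# infection rates, a crude stand-in for per-node infectivity.
hetero = [(a + b) / 2 for a, b in zip(si_curve(0.1, N, steps),
                                      si_curve(0.5, N, steps))]
```

Because SI growth is nonlinear in the rate, the heterogeneous mixture does not coincide with the single-average-rate curve, which is the gap the paper's infection-rate model addresses.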

Anonymous hierarchical identity-based encryption with bounded leakage resilience and its application
Hierarchical identity-based encryption can be used to protect sensitive data in cloud systems. However, as the traditional security model does not capture side-channel attacks, many hierarchical identity-based encryption schemes do not resist this kind of attack, which can exploit various forms of unintended information leakage. Inspired by this, leakage-resilient cryptography formalises several models of side-channel attacks. In this paper, we consider memory leakage resilience in anonymous hierarchical identity-based encryption schemes. By applying Lewko et al.'s tools, we construct a master-key leakage-resilient anonymous hierarchical identity-based encryption scheme based on dual system encryption techniques. As an interesting application of our scheme, we consider security for public-key encryption with multi-keyword ranked search (PEMKRS) in the presence of secret key leakage in the trapdoor generation algorithm, and provide a generic construction of leakage-resilient secure PEMKRS from a master-key leakage-resilient anonymous hierarchical identity-based encryption scheme.

OGPADSM2: oriented-group public auditing for data sharing with multi-user modification
In most data sharing protocols for cloud storage, updates to the outsourced data are executed only by the data owner. This is far from practical owing to the tremendous computational cost it places on the data owner. To date, only a few protocols allow multiple cloud users to update the outsourced data with integrity assurance, and these protocols do not consider collusion between misbehaving cloud servers and revoked users, which is an important challenge for data sharing protocols. To support multi-user modification and resist this collusion attack, we propose a novel public auditing scheme for data sharing with multi-user modification in this paper. Our scheme not only supports public checking and efficient user revocation but also provides backward security. At the same time, our scheme is provably secure under the bilinear Diffie-Hellman problem. To increase the efficiency of the auditor's verification, an improved protocol is given in which only one pairing operation is required in the auditor's verification phase. Compared with other protocols, our scheme has a lower computation cost for the auditor and stronger security.

Process capability in terms of TSS in waste water treatment technology
The sequential batch reactor (SBR) technique is one of the wastewater treatment methods presently used worldwide. Wastewater is categorised according to biological oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS) and bacterial presence. TSS is one of the important parameters of wastewater that needs analysis in water treatment. This paper deals with the determination of the process capability, process capability ratio and process capability index of total suspended solids (TSS), using a moving range (MR) chart and run rules. The results show that the process capability of the final TSS observations is 1.37; the final process capabilities of TSS were calculated after eliminating the out-of-control data points. The process capability index (Cpk) of the final TSS data turns out to be 0.14.
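
The standard individuals/MR-chart calculation behind such figures can be sketched as follows, using illustrative TSS readings and assumed specification limits rather than the paper's data: sigma is estimated from the mean moving range divided by d2 = 1.128, Cp = (USL - LSL) / (6 sigma), and Cpk = min(USL - mu, mu - LSL) / (3 sigma).

```python
import statistics

# Illustrative TSS readings (mg/L) -- NOT the study's data.
tss = [28, 30, 27, 31, 29, 30, 28, 32, 29, 30]
LSL, USL = 20, 40          # assumed specification limits

# Sigma estimated from the average moving range (d2 = 1.128 for n = 2),
# as is standard for individuals/moving-range charts.
mr = [abs(b - a) for a, b in zip(tss, tss[1:])]
sigma = statistics.mean(mr) / 1.128
mu = statistics.mean(tss)

cp = (USL - LSL) / (6 * sigma)               # process capability ratio
cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # capability index (centring-aware)
```

Cpk is always at most Cp; a Cpk far below Cp (as with the paper's 1.37 vs 0.14) signals a process that is off-centre relative to its specification limits.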

Assessing the feasibility of waste management solutions based on sorting at source and recycling
The lack of a proper waste management system in Lebanon resulted in a major waste management crisis starting in July 2015. Arcenciel, a non-governmental organisation, started a pilot project in Bekaa and Mount Lebanon aimed at studying the feasibility of a waste management system centred on sorting at source and recycling. Waste sorting proved feasible; however, sorting into two categories would be more efficient than sorting into three. The waste generation rate was between 0.83 and 0.88 kg/capita/day. The major errors in waste sorting proved to be centred on non-organic waste, whereas organic waste sorting showed only small error percentages. The major component of the waste was organic (62.10%), the percentage of recyclable waste being small (8.43%). More than 80% of the non-recyclable, non-organic waste proved suitable for energy recovery as refuse-derived fuel.

The assessment of waste source-separated system in Tehran and comparative analysis between collection systems by RIAM method
This study focuses on the municipal solid waste (MSW) source-separated collection system in Tehran in 2013. Tehran demonstrates a poor level of MSW source-separated collection owing to insufficient equipment, mismanagement and weak public awareness. To improve the current waste collection situation, a comparative assessment of MSW collection is carried out, based on environmental sustainability, between the conventional collection system and its pneumatic alternative. The technique used to assess the two approaches is the rapid impact assessment matrix (RIAM) method. The physical-chemical (PC), social-cultural (SC), biological-ecological (BE) and economical-operational (EO) aspects of the two methods were evaluated based on expert judgment. With the pneumatic method, both the PC and SC indexes improve considerably compared with the conventional method, and the improvements in the BE and EO indexes are also clear. According to the results, the pneumatic method shows the most positive effects and is recommended as the priority MSW collection system for Tehran.
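
The RIAM score for each component follows the standard formulation ES = (A1 × A2) × (B1 + B2 + B3), where A1 is the importance of the condition, A2 the magnitude of change, and B1-B3 the permanence, reversibility and cumulativity criteria; the sketch below uses illustrative scores, not the study's expert judgments.

```python
def riam_es(a1, a2, b1, b2, b3):
    """RIAM environmental score: (A1 * A2) * (B1 + B2 + B3)."""
    return (a1 * a2) * (b1 + b2 + b3)

# Illustrative scores (assumed): the pneumatic system's physical-chemical
# component (positive magnitude) vs the conventional one (negative magnitude).
es_pneumatic = riam_es(a1=2, a2=3, b1=3, b2=2, b3=2)
es_conventional = riam_es(a1=2, a2=-2, b1=3, b2=2, b3=2)
```

A positive ES indicates a beneficial impact and a negative ES a detrimental one, which is how the PC, SC, BE and EO comparisons in the study are read.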

Indoor air pollution due to household use of olive cake as a source of energy
The aim of this study was to investigate the impact of using olive cake as a source of heating energy on indoor air quality under different conditions. The results indicated that high concentrations of CO, H2S, Cl2, NOx and SO2 were generated, exceeding the recommended standards, while a moderate level of CO2 was detected. High concentrations of CO2, H2S, Cl2, NOx and SO2 were measured during the first 20 min after starting the combustion process, while the highest CO concentration occurred after 10 min. A modified combustor with a mechanical feeding system provided better combustion and stable gas concentrations, while manual feeding caused fluctuations in the concentrations of gases. The study also revealed that the indoor concentrations of gases decrease as the distance from the combustor increases. Based on the results, several recommendations regarding the use of olive cake, the improvement of the combustion process and public awareness are reported.

Role of automation in waste management and recent trends
As technology advances, human needs grow, and systems are required to manage and maintain them while remaining hassle-free for users. Consider, for example, taking daily attendance of 100 students, a tedious and time-consuming job; instead, a biometric device can be passed among the students to record attendance. This is an application of automation at a small scale. At the scale of enterprises established worldwide, automating conventional processes to reduce human effort becomes an important need. Waste management is a critical process that involves collecting and managing waste from all areas of a city, and there are still places where people dump and litter waste. This collection process can be automated: a platform can be created through which people file complaints about waste dumping, and CCTV cameras can be installed in such places to monitor them and help keep the city clean.

Assessment of heavy metal contamination from municipal solid waste open dumping sites in Bangladesh
Co-disposal of household hazardous materials with municipal solid waste (MSW) into open dumping sites is the usual practice in Bangladesh. In this paper, the characterisation of heavy metals in MSW at open dumping sites in Matuail (Dhaka) and Khulna is presented. MSW samples were collected and analysed for total metal content (Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn) and metal fractions. The analysis showed that the total metal content in MSW at the Matuail dumping site is higher than at the Khulna dumping site, and that the metals are predominantly associated with the fine soil fraction. Both sites contain a high bio-available fraction of metals. The TCLP analysis showed that the dumping sites are non-hazardous in the context of heavy metal pollution. Runoff leachate also contains insignificant concentrations of metals. Under the present conditions prevailing at both dumping sites, the dissolution of acid-soluble metals and the associated risk are very low.

Fuzzy cluster and validity indices in a socio-economic context
The multidimensional nature of socio-economic hardship requires a multidimensional research approach, oriented towards advanced solutions, to capture the changing dimensions of the problem at hand. One such approach consists of abandoning traditional dichotomous logic in favour of a semantically richer fuzzy classification, in which each unit belongs and, at the same time, does not belong to a given category. Cluster analysis allows us to identify the profiles of families who meet certain descriptive characteristics not defined a priori. The approach used in this work to synthesise and measure hardship conditions is based on a clustering procedure known as fuzzy clustering by local approximation of membership (FLAME), which defines the neighbourhood of each object and identifies cluster-supporting objects. This clustering method not only allows each instance of a dataset to belong to a unique main cluster, but also allows an instance to be shared by two or more clusters on the basis of suitably defined 'fuzzy profiles'.
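
The fuzzy-membership idea can be sketched with a fuzzy c-means style membership formula (a generic illustration of graded membership, not the FLAME procedure itself):

```python
def memberships(point, centres, m=2.0):
    """Graded membership of a 1-D point in each cluster:
    u_k = 1 / sum_j (d_k / d_j)^(2/(m-1)), with d_k the distance to centre k."""
    d = [abs(point - c) for c in centres]
    if 0.0 in d:                        # point sits exactly on a centre
        return [1.0 if x == 0 else 0.0 for x in d]
    return [1.0 / sum((dk / dj) ** (2 / (m - 1)) for dj in d) for dk in d]

# A point at 3.0 between cluster centres at 0 and 10: it belongs mostly
# to the first cluster but retains a nonzero membership in the second.
u = memberships(3.0, centres=[0.0, 10.0])
```

The memberships always sum to one, so a family can be "partly deprived" in two hardship profiles at once rather than being forced into a single dichotomous label.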

An adaptive method for the performance evaluation of a concentrating solar power system using NN and CS
In this paper, the proposed adaptive technique is used to improve the performance of the concentrating solar power (CSP) system. Initially, a neural network (NN) generates a dataset based on the reference signal and the normal signal. Then the cuckoo search (CS) algorithm improves the effectiveness of the proposed algorithm by optimising the weights and biases. The novelty of this paper is to evaluate the performance of the CSP system and improve the effectiveness of gathering maximum energy. Thereafter, to assess the accuracy and efficiency of the system, parameters such as the regression coefficient, root mean square error (RMSE) and error variation are estimated. The proposed optimisation process is implemented in the MATLAB/Simulink platform, and the results are estimated and analysed based on the error parameters of the CSP system. On this basis, the proposed optimisation process is evaluated and compared with other traditional methods.

An evaluation of four reordering algorithms to reduce the computational cost of the Jacobi-preconditioned conjugate gradient method using high-precision arithmetic
In this work, four heuristics for bandwidth and profile reduction are evaluated. Specifically, the results of a recently proposed heuristic for bandwidth and profile reduction of symmetric and asymmetric matrices, using a one-dimensional self-organising map, are evaluated against the results obtained from the variable neighbourhood search heuristic for bandwidth reduction, the original reverse Cuthill-McKee method, and the reverse Cuthill-McKee method with the starting pseudo-peripheral vertex given by the George-Liu algorithm. These four heuristics were applied to three datasets of linear systems composed of sparse symmetric positive-definite matrices arising from finite-volume discretisations of the heat conduction and Laplace equations. The linear systems are solved by the Jacobi-preconditioned conjugate gradient method using high-precision numerical computations. The best heuristic in the simulations performed with one of the datasets was the reverse Cuthill-McKee method with the starting pseudo-peripheral vertex given by the George-Liu algorithm. On the other hand, no gain in the computational cost of the linear system solver was obtained when a heuristic for bandwidth and profile reduction was applied to instances in the other two datasets.
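
A compact sketch of the reverse Cuthill-McKee idea follows (illustrative only; in practice a library implementation such as scipy.sparse.csgraph.reverse_cuthill_mckee should be preferred):

```python
from collections import deque

def rcm(adj):
    """Reverse Cuthill-McKee on an adjacency-list graph: BFS from a
    low-degree seed, visiting neighbours in increasing-degree order,
    then reverse the ordering to reduce matrix bandwidth."""
    degree = {v: len(adj[v]) for v in adj}
    visited, order = set(), []
    for start in sorted(adj, key=lambda v: degree[v]):   # lowest-degree seed
        if start in visited:
            continue
        queue = deque([start])
        visited.add(start)
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda w: degree[w]):
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
    return order[::-1]   # the "reverse" in reverse Cuthill-McKee

# Cycle graph 0-1-2-3-0, written as an adjacency list.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
perm = rcm(adj)
```

Renumbering a sparse matrix by such a permutation clusters nonzeros near the diagonal, which is what reduces fill and per-iteration cost in the preconditioned conjugate gradient solver.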

Dynamic self-learning water-masking algorithm for AATSR, MERIS, and SPOT VEGETATION
Within the ESA CCI 'Fire Disturbance' project (Guenther et al., 2012), a dynamic self-learning water-masking approach was developed for AATSR, MERIS-FR(S), MERIS-RR and SPOT VEGETATION (SPOT-VGT) data. The primary goal of the development was to find a generic algorithm for all sensors by combining static water masks on a global scale with a self-learning algorithm. Our approach results in the generation of a dynamic water mask which helps to distinguish burned areas from other dark areas such as cloud or topographic shadows or coniferous forests. The use of static water masks as training areas for the learning algorithm must take into account that small and shallow water bodies may change over time, and that precise geo-location of the static water mask and the scene under investigation is mandatory. The comparison of the water masks derived from all sensors for a region in Kazakhstan demonstrates the quality of the new dynamic water masks. In addition, the advantages over other water-masking algorithms (MOD44W, Hansen_GFC or IDEPIX) are shown. Furthermore, the dynamic water masks of AATSR, MERIS and SPOT-VGT for the same region are presented and discussed together with the use of more detailed static water masks.

Predicting symbolic interval-valued data through symmetrical nonlinear regression
We propose a symmetrical nonlinear regression model to fit interval-valued data. An important feature of this new model is that its estimates and predictions are less sensitive to outliers than a nonlinear model proposed in the literature. Monte Carlo simulation studies were carried out to investigate the performance of the model in different scenarios with some percentage of outliers. The results, based on the mean magnitude of relative errors, are presented and discussed. The model was also fitted to a real symbolic dataset with noticeable interval outliers, and its forecast accuracy was assessed.

An efficient intrusion detection system for identification from suspicious URLs using data mining algorithms
The main objective of this paper is to design an intrusion detection system that identifies suspicious URLs using an optimal fuzzy logic system. The system consists of three modules: 1) feature extraction; 2) feature selection; 3) classification. First, we extract four kinds of features from the dataset, for a total of 30 features. Among these, we select the important features using a hybridisation of the firefly and cuckoo search algorithms (HFFCS). We then train a fuzzy logic classifier on the selected features and calculate the fuzzy logic score. Finally, in testing, the fuzzy logic classifier detects malicious URLs based on the fuzzy score. In this work, we use two datasets: the URL reputation dataset and the phishing websites dataset. The experimental results demonstrate that the proposed malicious URL detection method outperforms other existing methods.

Combining mobile technologies in environmental education: a Greek case study
In recent years, owing to the widespread use of information and communication technologies (ICTs), various technological tools and services have found application in education. The education community has recognised that mobile devices and their applications can be used for environmental education as well as in education for sustainable development. Quick Response (QR) codes are an example, as they successfully integrate mobile learning technology into environmental education. When attached to an object, QR codes add a layer of digital functionality that transforms and expands the way users of mobile devices access information, without temporal-spatial restrictions. In the present study, we present a didactic approach using mobile devices and QR codes that was implemented by high school teachers in the field of environmental education.

Investigating the variables influence women users intentions to use smartphones: evidences from emerging economies
The paper focuses on identifying the variables that influence women users' intentions to use smartphones in the Sultanate of Oman. The study was conducted among 300 women smartphone users through personal investigators. Five major constructs, namely demographic, psychographic, social, cultural and usage variables, were selected, and the influence of these variables on women users' intentions to use smartphones was tested. Structural equation modelling was used, and usage variables were found to have a more significant influence on women users' intentions to use smartphones than the other selected variables. Implications of these outcomes are also discussed.

Satisfaction of high school students with a mobile game-based English learning system
This study investigated the satisfaction of high school students with a mobile game-based English learning system. The system, called the Happy English Learning System (HELS), integrates learning material into a game-based context; it was constructed and installed on mobile devices to conduct an experiment over a period of eight weeks. The experimental sample comprised 38 students. Through statistical analysis, the results confirmed the merits of a mobile game-based approach for high school students learning English. The findings included the following: (1) the adoption of a familiar but challenging game design is a critical technical factor; (2) the students who were more interested in learning English were more satisfied with this approach; (3) regardless of the extent to which the students were interested in playing games or enjoyed playing the HELS, they acknowledged satisfaction with the system and the learning process; and (4) the more time the students spent playing the HELS per week, the more satisfied they were with the system and the learning process. Finally, several suggestions are proposed for future applications.

Prepare your own device and determination (PYOD): a successfully promoted mobile learning mode in Taiwan
When promoting mobile learning on a large scale, beyond cultivating teachers' ability to design mobile learning activities, many details of their practices affect the promotion results. In Taiwan, teachers usually need to spend much time managing and maintaining the mobile devices purchased by their schools, which can significantly decrease their willingness to participate in mobile learning programs. To cope with these problems, the Prepare Your Own Device and Determination (PYOD) mode is proposed and implemented in the Taiwan mobile learning promotion program for high schools. In this paper, the implementation levels of mobile learning in the selected high schools are analysed and reported. It is found that PYOD has been well accepted and implemented by most schools; moreover, following the proposed mobile learning model and strategies, many schools have reached an implementation level that engages students in higher-order thinking.

The combined theory of planned behaviour and technology acceptance model of mobile learning at Tehran universities
With the growth and progress of science and the constant development of new branches of technology, enormous changes have occurred in the domain of education and learning. Viewed in the narrow sense of electronic technologies, learning has outgrown its traditional frames and the electronic format has established its dominant role. Mobile learning can be seen as an emerging technology currently welcomed by various organisations. This study examined the effect of the factors in a composite structural model of the theories of technology acceptance and planned behaviour on the acceptance of mobile learning by students of Tehran universities. To this end, 170 questionnaires were distributed and collected across the universities of Tehran. The results indicated that 85.7% of students have accepted mobile learning. Additionally, some of the survey's aspects, such as attitudinal factors, controlling belief factors and self-controlling beliefs, have had a positive effect on an individual's actual behaviour in accepting mobile learning.

Predicting microbial interactions from time series data with network information
The evolution of biotechnological knowledge poses new challenges for studying microbial interactions. The vector autoregressive (VAR) model has proved to be an efficient approach for inferring dynamic interactions in biological systems. However, high-throughput metagenomics or 16S-rRNA sequencing data are high-dimensional, meaning that the number of covariates is much larger than the number of observations. Reducing the dimension of the data or selecting suitable covariates therefore becomes a critical component of VAR modelling. In this paper, we develop a graph-regularised vector autoregressive model that incorporates network information to infer causal relationships among microbial entities. The method not only considers the signs of the network connections between any two covariates, but also constructs a network weight matrix from microbial topology information. A coordinate descent algorithm for estimating the model parameters improves the accuracy of prediction. The experimental results on a time series dataset of human gut microbiomes indicate that the proposed approach performs better than other VAR-based models with penalty functions.
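
The flavour of regularised VAR estimation can be sketched with a plain ridge-penalised VAR(1) fit, a simple stand-in for the graph-regularised estimator (all parameters here are assumptions):

```python
import numpy as np

# Simulate a 2-taxon VAR(1) process: x_t = A @ x_{t-1} + noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.4]])
T = 200
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(2)

# Ridge-penalised least squares: the penalty shrinks the coefficient
# matrix, which matters when observations are scarce relative to taxa.
X, Y = x[:-1], x[1:]                 # lagged predictors and responses
lam = 0.1                            # penalty strength (assumed)
A_hat = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ Y).T
```

A graph-regularised variant would replace the identity matrix in the penalty with a network-derived weight matrix, so that coefficients between topologically linked taxa are shrunk less than those between unlinked ones.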

Implementing computational biology pipelines using VisFlow
Data integration continues to baffle researchers even though substantial progress has been made. Although the emergence of technologies such as XML, web services, the semantic web and cloud computing has helped, a system in which biologists are comfortable articulating new applications and developing them without technical assistance from a computing expert is yet to be realised. The distance between a friendly graphical interface that does little and a 'traditional' system that is clunky yet powerful is, more often than not, deemed too great. The question that remains unanswered is whether a user can state her query, involving a set of complex, heterogeneous and distributed life sciences resources, in an easy-to-use language and execute it without further help from a computer-savvy programmer. In this paper, we present a declarative meta-language, called VisFlow, for requirement specification, and a translator for mapping requirements into executable queries in a variant of SQL augmented with integration artefacts.

Prediction of DNA-binding residues from sequence information using convolutional neural network
Most DNA-binding residue prediction methods overlook motif features, which are important for the recognition between protein and DNA. To use motif features efficiently for prediction, we first propose using a convolutional neural network (CNN), a deep learning model, to extract discriminant motif features. We then propose a neural network classifier, referred to as CNNsite, that combines the extracted motif features with sequence features and evolutionary features. Evaluation on PDNA-62, PDNA-224 and TR-265 shows that motif features perform better than sequence features and evolutionary features. Evaluation on PDNA-62, PDNA-224 and an independent data set shows that CNNsite also outperforms previous methods. We further show that many motif features composed of residues that play important roles in DNA-protein interactions have large discriminant power, indicating that CNNsite effectively extracts important motif features for DNA-binding residue prediction.
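The core mechanism (a convolution filter acting as a motif detector over a one-hot-encoded sequence) can be sketched in a few lines of numpy; the "TATA" filter below is a hypothetical hand-set kernel standing in for weights a CNN such as CNNsite would learn.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a (len, 4) one-hot matrix."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def motif_scan(seq, kernel):
    """Slide a k x 4 filter over the sequence, as a 1D convolution
    layer would; each output position is a motif match score."""
    x = one_hot(seq)
    k = kernel.shape[0]
    return np.array([np.sum(x[i:i + k] * kernel)
                     for i in range(len(seq) - k + 1)])

# Hypothetical filter tuned to the motif "TATA"
tata = one_hot("TATA")
scores = motif_scan("GGCTATAGGC", tata)
best = int(np.argmax(scores))   # position where the motif match is strongest
```

A perfect match scores `k` (here 4.0); a trained CNN learns many such kernels jointly, each responding to a different discriminant motif.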

Concod: an effective integration framework of consensus-based calling deletions from next-generation sequencing data
Detection of structural variations such as deletions from the short sequence reads of next-generation sequencing is a significant but challenging problem in genome analysis. Although many deletion-calling tools have been produced, no single method clearly outperforms all the others. A widely used approach is therefore merging, which combines features from multiple callers to achieve more accurate deletion calling. However, most existing merging methods are heuristic, and the deletions they call still include many false positives. In this paper, we introduce Concod, an effective integration framework that uses machine learning to detect deletions. First, Concod collects candidate deletions from multiple existing detection tools. Then, based on multiple detection theories, it extracts the features of each candidate from the sequence data. Finally, a machine learning model is trained on these features to distinguish true from false candidates. We test our framework on real data at different coverage levels and compare it with existing tools, including Pindel, SVseq2, BreakDancer and DELLY. Results show that Concod significantly improves both the precision and the sensitivity of deletion detection.
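The final learning step can be illustrated with a minimal classifier over candidate features. The two features below (number of supporting callers, split-read count) and the simulated data are hypothetical stand-ins; the abstract does not specify Concod's actual feature set or model.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression: a minimal stand-in
    for the learning step that separates true from false candidate
    deletions based on their extracted features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(true call)
        g = p - y                                 # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Hypothetical candidate features: [supporting callers, split-read count]
rng = np.random.default_rng(1)
true_calls = np.column_stack([rng.integers(3, 5, 200), rng.normal(20, 3, 200)])
false_calls = np.column_stack([rng.integers(1, 3, 200), rng.normal(5, 3, 200)])
X = np.vstack([true_calls, false_calls]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardise features
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = train_logreg(X, y)
acc = ((X @ w + b > 0) == y.astype(bool)).mean()
```

The point of the sketch is the pipeline shape: candidates from several callers become feature vectors, and a trained model filters out false positives that any single heuristic would keep.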

A novel method to measure the semantic similarity of HPO terms
Making a precise disease diagnosis from complex clinical features and a highly heterogeneous genetic background is critical yet challenging. Recently, phenotype similarity has been applied effectively to model patient phenotype data. However, existing measures are adapted from Gene Ontology-based term similarity models, which are not optimised for human phenotype ontologies. We propose a new similarity measure called PhenoSim. Our model includes a noise-reduction component to handle noisy patient phenotype data and a path-constrained, information content-based method for measuring phenotype semantic similarity. Evaluations comparing PhenoSim with four existing approaches show that PhenoSim effectively improves the performance of HPO-based phenotype similarity measurement, thereby increasing the accuracy of phenotype-based causative gene prediction and disease prediction.
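The information-content family of measures that PhenoSim builds on can be sketched with a classic Resnik-style similarity on a toy ontology. The terms, frequencies and DAG below are invented for illustration; PhenoSim's path constraint and noise reduction are not reproduced here.

```python
import math

# Toy phenotype ontology: child -> parents (hypothetical HPO-like terms)
PARENTS = {
    "abnormal_gait": ["motor_abnormality"],
    "tremor": ["motor_abnormality"],
    "motor_abnormality": ["phenotype"],
    "seizure": ["phenotype"],
    "phenotype": [],
}
# How often each term (or a descendant) annotates a patient in a toy corpus
FREQ = {"abnormal_gait": 2, "tremor": 2, "motor_abnormality": 5,
        "seizure": 3, "phenotype": 12}

def ancestors(t):
    """A term together with all of its ancestors in the ontology DAG."""
    out = {t}
    for p in PARENTS[t]:
        out |= ancestors(p)
    return out

def ic(t):
    """Information content: rarer terms are more informative."""
    return -math.log(FREQ[t] / FREQ["phenotype"])

def resnik_sim(a, b):
    """IC of the most informative common ancestor - the classic Resnik
    measure that IC-based phenotype similarity models extend."""
    common = ancestors(a) & ancestors(b)
    return max(ic(t) for t in common)
```

Two motor phenotypes share the informative ancestor `motor_abnormality`, so their similarity is positive, while terms whose only common ancestor is the root score zero.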

An improved single neuron self-adaptive PID control scheme of superheated steam temperature control system
Superheated steam temperature control units in thermal power plants suffer from poor control quality and are prone to over-temperature. In this paper, exploiting the self-learning ability, strong adaptability, high robustness and fast response of the single-neuron adaptive controller, an improved single-neuron self-adaptive proportional-integral-derivative (PID) control scheme for the superheated steam temperature control system is presented. The proposed scheme has two main characteristics: 1) compared with the traditional PID scheme, the three PID parameters (the proportional, integral and derivative coefficients) are replaced by a single neuron-adaptive control coefficient K; 2) the proposed strategy provides a new theoretical basis and research method for the control of superheated steam temperature. Simulation results show that the scheme improves control performance under large time delay and multiple disturbances, and exhibits strong robustness, high stability and good control quality.
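The general single-neuron self-adaptive PID structure can be sketched on a toy plant: the three PID terms become neuron inputs, a single gain `K` scales the output, and the weights adapt online by a supervised Hebbian rule. The first-order plant, gains and learning rates below are hypothetical, not the paper's superheated-steam model.

```python
import numpy as np

def simulate(steps=300, K=0.5, eta=(0.01, 0.01, 0.01)):
    """Single-neuron self-adaptive PID on a toy first-order plant
    y[k+1] = 0.9*y[k] + 0.1*u[k] tracking a unit setpoint.
    The neuron's three inputs are the incremental P, I, D error terms;
    weights adapt by a supervised Hebbian rule."""
    w = np.array([0.3, 0.3, 0.3])              # initial neuron weights
    y = u = e1 = e2 = 0.0                      # output, control, past errors
    r = 1.0                                    # setpoint
    for _ in range(steps):
        e = r - y
        x = np.array([e - e1, e, e - 2*e1 + e2])   # P, I, D inputs
        wn = w / np.abs(w).sum()               # normalised weights
        u = u + K * wn @ x                     # incremental control law
        w = w + np.array(eta) * e * u * x      # supervised Hebb learning
        e2, e1 = e1, e
        y = 0.9 * y + 0.1 * u                  # plant update
    return y

y_final = simulate()
```

Note how the individual P, I, D coefficients never appear: the normalised weights distribute a single gain `K` across the three terms, which is the structural simplification the scheme describes.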

Formation of heterogeneous multi-agent systems under min-weighted persistent graph
In this paper, we develop a simple and efficient formation control framework for heterogeneous multi-agent systems under a min-weighted persistent graph. As the abilities of individual agents may differ, the architecture of the agents is considered heterogeneous. To reduce the communication complexity of keeping the agents connected, a topology optimisation scheme based on the min-weighted persistent graph is proposed. According to the agent topology, a directed acyclic graph (DAG) is constructed to reflect the signal flow among agents, and the corresponding formation control protocol is then designed using the transfer function model. Applying the proposed method, it is shown that the communication complexity of the multi-agent system is decreased and connection safety is improved. Based on signal flow graph analysis and Mason's rule, convergence conditions are provided under which the agents can keep a formation. Finally, several simulations illustrate the effectiveness of our theoretical results.

Single-wheel robot modelling using natural orthogonal complement
Modelling the single-wheel robot (SWR) is foundational to further research and is therefore very important. Early SWR models were mostly derived from the Euler-Lagrange equations, with Euler angles used to compute the robot's attitude. The Euler-Lagrange approach requires calculating many quadratic and partial differential terms, while Euler angles are known to be prone to singularities. This paper proposes a novel SWR model based on the natural orthogonal complement, which is simpler and computationally more efficient than the Euler-Lagrange equations. Furthermore, quaternions are employed to compute the attitude of the SWR, avoiding the singularity problem. Moreover, the radius of the toroidal wheel is taken into account in the model for the first time, bringing the mathematical model closer to the real physical situation. The dynamic modelling results have been verified by numerous experiments.
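The singularity-free attitude representation can be illustrated independently of the robot model: integrating body rates through the quaternion kinematics q-dot = 0.5 q (x) [0, omega] never encounters the gimbal-lock configurations that Euler angles do. The angular-rate profile below is a made-up example, not the SWR's dynamics.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_attitude(omega, dt, steps):
    """Propagate attitude with q_dot = 0.5 * q (x) [0, omega];
    renormalising each step keeps q a unit quaternion, and no
    orientation produces a singularity."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(steps):
        q = q + 0.5 * dt * quat_mul(q, np.array([0.0, *omega]))
        q = q / np.linalg.norm(q)
    return q

# Spin about z at pi/2 rad/s for 1 s -> a 90-degree rotation
q = integrate_attitude([0.0, 0.0, np.pi / 2], dt=1e-3, steps=1000)
```

After one second the quaternion matches the closed-form half-angle result [cos(pi/4), 0, 0, sin(pi/4)], with no special handling needed at any orientation.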

Development of algorithm for different programmable modes for a prototype of orthotic ambulatory device for gait rehabilitation
Presently available orthotic devices have many limitations in their mechanical design, sensor mechanisms and real-time control when it comes to achieving stable, efficient and human-like bipedal walking. A user-adaptive control algorithm with different programmable modes is implemented for a developed prototype of an orthotic device. Gait postures such as sitting, standing, walking, stair walking and a user-defined exercise mode cover basic human mobility functions, and these activities form part of rehabilitation therapy for people with locomotor disabilities. With the emerging trend of robotic rehabilitation, orthotic devices need control strategies that aid early recovery and make gait rehabilitation more effective. The control strategy is based on gait parameters such as lower-limb joint angles and ground reaction force. The software algorithm is developed on the LabVIEW platform for the different modes: sit, stand, walk and stair climbing. The implemented control strategy is verified by analysing the estimated trajectory errors of the device's hip, knee and ankle, and these trajectories are compared with standard available data.

Reverse logistics network design under greenness, reliability and refurbished product demand considerations
The aim of this paper is to present a new closed-loop supply chain (CLSC) network design model for multi-component and multi-product systems. This integrated model is more representative of industrial operations and takes into account the following important design factors: multi-component products, design for disassembly, existence of a secondary market for refurbished products, bill of material for product decomposition into parts, reliability and greenness levels of parts/products. A mixed-integer programming model is developed for a forward/reverse logistic network that includes suppliers, customer zones, inspection, repair and disassembly centres (IRDC), and a recycling centre. A wide range of numerical experiments are conducted and sensitivity analyses are carried out on various parameters yielding valuable managerial insights. The model is used to expose the effects of parts reliability and product greenness on the reverse flow and the economical operating market range for refurbished products in the reverse flow.

High-reliability hash-chain based broadcast method in wireless network for factory automation
Authentication is important for broadcast, a widely used communication method in wireless networks for factory automation. Traditional hash-chain based methods maintain long key chains for authentication, but they face a trade-off between the number of broadcasts supported and the large storage the key chain requires. This paper presents TuTESLA, a high-reliability secure broadcast method based on multiple hash chains. First, multiple transmission is applied to improve broadcast reliability; second, multiple hash chains are introduced and their structure is optimised to eliminate the dependence between chain length and the number of broadcasts supported. Compared with traditional hash-chain methods, in terms of security, TuTESLA offers comparable message and key security and stronger replay-attack protection, but weaker resistance to denial-of-service attacks; in terms of overhead, TuTESLA has much lower storage overhead at the cost of slightly higher computing and communication overhead.
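The single hash chain underlying TESLA-style schemes (which TuTESLA multiplies and restructures) is easy to sketch: keys are generated by repeated hashing, the last-generated key is distributed first as a trusted anchor, and later-disclosed keys are verified by hashing forward to that anchor. The seed below is arbitrary.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def make_chain(seed, n):
    """Build a key chain K_n .. K_0 with K_{i-1} = H(K_i). The anchor
    K_0 is distributed first; keys are disclosed in reverse order of
    generation, so each disclosure commits to the next."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain[::-1]                 # chain[i] is K_i; chain[0] is K_0

def verify(anchor, key, i):
    """A receiver authenticates a disclosed key K_i by hashing it
    i times back to the trusted anchor K_0."""
    for _ in range(i):
        key = h(key)
    return key == anchor

chain = make_chain(b"secret-seed", 10)     # K_0 .. K_10
ok = verify(chain[0], chain[7], 7)         # disclosed K_7 checks out
```

The storage/coverage trade-off the paper targets is visible here: supporting more broadcasts means a longer chain to store, which is what splitting the load across multiple optimised chains relieves.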

Towards a framework for a resilient supply chain in a turbulent environment: a review of its drivers
In recent years, unexpected events such as natural disasters, including earthquakes and tsunamis, as well as revolutions and acts of vandalism, have become frequent. Such events can break down the supply chain flow. A supply chain's capacity to recover quickly and re-establish its flow characterises its resilience, a topic that receives significant attention from managers and researchers in the supply chain field. This study focuses on developing a framework for a resilient supply chain in a turbulent environment when it is faced with unexpected events. For this purpose, the four supply chain drivers (inventory, transportation, facilities and information) are exploited. The framework is intended to reinforce supply chain resilience and minimise the negative impact of unexpected events, thereby favouring a quick return to business. This work contributes to our understanding of supply chain dynamics in a turbulent environment.

Spreadsheet-based modelling for out of matrix cost in inbound logistics
The purpose of the study was to estimate the non-budgeted cost incurred by an FMCG company in its inbound logistics operations for a selected product category. We conducted the study through an action research approach, collecting primary data for all shipments made from all of the company's manufacturing units to its distributors across pan-India. A quantitative model was developed to estimate the logistics cost over a six-month period, and a spreadsheet-based model was designed to generate reports that give decision makers and managers ease of use. The study identified a lack of coherence between the logistics planning and implementation stages, resulting in expenditure exceeding the budget. The research indicates that out-of-matrix cost constituted, on average, 25% of the total transportation cost over the study period. We identified the potential causes of these inefficiencies and recommended using the developed model to obtain a more realistic picture of costs.

Traffic impact analysis on Paris and suburbs ways using BFSIS model
This paper presents an original approach to analysing traffic flow density and travel time as additional indicators, using a new extension of the well-known breadth-first search (BFS) algorithm. The average road grade, the average legal speed on the road and weather conditions are taken into account. In addition to these indicators, the paper focuses on road traffic measurement as a supplementary parameter required when calculating electric vehicle energy consumption.
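One plausible shape for such a BFS extension (a sketch under assumptions, since the abstract does not detail the BFSIS model) is a standard BFS traversal that, alongside hop distance, accumulates per-edge travel time derived from road length, legal speed and a weather factor. The road graph and slowdown factor below are invented.

```python
from collections import deque

def bfs_travel_time(graph, src):
    """BFS over a road graph; for each reached node, accumulate an
    estimated travel time along the first (fewest-hop) path found.
    Edge travel time = length / (legal_speed * weather_factor).
    graph[u] = list of (v, length_km, speed_kmh)."""
    weather_factor = 0.8                 # hypothetical rainy-day slowdown
    time = {src: 0.0}                    # hours from src to each node
    q = deque([src])
    while q:
        u = q.popleft()
        for v, length, speed in graph[u]:
            if v not in time:            # first discovery wins, as in BFS
                time[v] = time[u] + length / (speed * weather_factor)
                q.append(v)
    return time

roads = {
    "A": [("B", 8.0, 50.0), ("C", 12.0, 90.0)],
    "B": [("D", 5.0, 50.0)],
    "C": [("D", 4.0, 90.0)],
    "D": [],
}
times = bfs_travel_time(roads, "A")
```

Note the design limitation this exposes: BFS finds fewest-hop paths, so the accumulated time is an indicator along those paths rather than a guaranteed minimum travel time (which would call for Dijkstra's algorithm on the time-weighted edges).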

Examining corporate social responsibility as a public relations vehicle: an empirical study
The purpose of this study was to examine the perceptions of public relations and marketing managers and administrators regarding corporate social responsibility (CSR) practices and strategy at 25 firms in the European service industry. The research is qualitative in nature and investigates how social initiatives built around a company's CSR strategy are perceived, positioned and deployed to maximise, simultaneously, internal benefits (i.e., financial performance) and external benefits to society (including a firm's stakeholders). Directions for future studies in the area of CSR in Europe are also discussed.

Corporate governance and corporate social responsibility disclosure: evidence from Saudi Arabia
This study examines corporate social responsibility (CSR) disclosure practices and the potential influence of corporate governance (CG), ownership structure and corporate characteristics in an emerging Arab country, Saudi Arabia. It extends the extant literature by investigating the drivers of CSR disclosure in a country that lacks research in this area. The study examines 267 annual reports of Saudi non-financial listed firms during 2007-2011, using manual content analysis, multiple regression analysis and a checklist of 17 CSR disclosure items based on ISO 26000. The analysis finds that average CSR disclosure is 24%, higher than the 14.61% and 16% found by Al-Janadi et al. (2013) and Macarulla and Talalweh (2012) for two Saudi samples covering 2006-2007 and 2008, respectively. This improvement may be due to the application of the Saudi CG code in 2007. The analysis also shows that government and family ownership, firm size and firm age are positive determinants of CSR disclosure and firm leverage a negative determinant, while an effective audit committee, board independence, role duality, institutional ownership, firm profitability and industry type are not found to be determinants. This study is important because it uses agency theory to ascertain the influence of specific board characteristics and ownership structures on disclosure. As a result, it provides important implications for CG regulators and different stakeholders, and offers an evaluation of the recently applied Saudi CG code from a CSR disclosure perspective.

Equator Principles reporting: factors influencing the quality of reports
This study analyses the reporting of Equator Principles Financial Institutions (EPFIs). The Equator Principles are a voluntary code of conduct providing guidelines for assessing, managing and reporting environmental and social impacts in project finance. The objectives of the study are: 1) to understand whether EPFIs follow the Equator Principles reporting guidelines; 2) to assess the quality of the EPFIs' mandatory reports; 3) to analyse the causes of differences in reporting. Because the Equator Principles are a voluntary code of conduct, or so-called soft law, the research is based on institutional theory. Our results suggest that although EPFIs follow the reporting guidelines, only about 5% disclose all the information the guidelines require and consequently achieve the highest reporting-quality score. Furthermore, differences in reporting quality are mainly driven by EPFI size: the larger the EPFI in terms of total assets, the higher its reporting quality. We conclude that further mechanisms, such as standardisation and assurance, are needed to guarantee transparent reporting of environmental and social project risks.

Corporate sustainability performance and firm performance: evidence from India and South Korea
This paper examines the association between corporate sustainability performance (CSP) and firm performance in India and South Korea from different perspectives. The study is based on a sample of 28 listed non-financial firms from India and 26 from South Korea over a six-year period from 2008-2009 to 2013-2014. The market-to-book ratio (MBR) is used to measure firm performance. Content analysis is employed to calculate the disclosure score for sustainability performance using the reporting format of the Global Reporting Initiative. Employing appropriate regression models, the results reveal a positive and significant association between CSP (in terms of both level and quality) and MBR for both countries. The study also finds a significant impact of all three components of corporate sustainability on MBR. Moreover, the relative influence of CSP on firm performance is found to be greater in South Korea than in India.

Sustainability in the hierarchy: how corporate sustainability is anchored in the organisational structure
The purpose of this article is to contribute to our understanding of corporate sustainability practice by examining where and how corporate sustainability is anchored in the organisational structure. An empirical case study finds three main anchors for corporate sustainability: the board of directors, the executive management, and a specialised corporate sustainability unit. Based on the case study findings, the article explores how the different anchors exercise their corporate sustainability mandates. Directing attention to these anchor points can increase the chances of successfully integrating sustainability into core business practices, a main goal of strategic corporate sustainability. The study contributes to the literature by illuminating where responsibility for corporate sustainability is placed in the organisational structure and by exploring the key tasks and practices of each anchor point.

Modelling customer satisfaction and customer loyalty in the frame of telecommunications industry: a review
The purpose of this paper is to review the existing literature on modelling the determinants of customer satisfaction and loyalty in the telecommunications industry. The available literature on the development and treatment of various models of customer satisfaction and customer loyalty in the telecom service sector is reviewed systematically, allowing us to compare findings from studies carried out in different geographies. The review reveals underlying patterns of relationships and interplay between customer satisfaction, customer loyalty and their influencing factors. The paper examines the areas in which the reviewed studies, drawn from varied geographical areas of the world, stand in consensus and the areas in which their findings differ. Such understanding is relevant for academics and researchers furthering work in this field. The insights from the reviewed studies are discussed, and suggestions for future research are provided, paving the way for further empirical analysis of the insights this paper brings to the surface.

Investor's overconfidence and trading volume in the Tunisian market
A sample of 35 Tunisian companies is used to study the relationship between investors' overconfidence and trading volume, distinguishing overconfidence from the disposition effect. We use the VAR model in two versions. The market VAR model shows a significant positive correlation between past market return and current market turnover, a result consistent with both the overconfidence hypothesis and the disposition effect. We therefore use a securities VAR model, which shows a significant positive correlation between past market return and current securities turnover in the presence of securities returns for some companies; this result supports the overconfidence hypothesis but not the disposition effect. Other companies are characterised by a disposition effect rather than overconfidence, since the securities VAR model shows a significant positive correlation between their individual past returns and current turnover in the presence of past market returns. We conclude that exchange activity is not a simple summation of the disposition effects of individual securities.

Effect of social responsibility and service quality on customer loyalty: the mediating role of perceived benefits and satisfaction
The purpose of this research is to understand how social responsibility and service quality affect customer loyalty in the context of the education system. A comprehensive literature review was conducted to develop a conceptual model for education. A self-administered questionnaire survey was employed; the target population was the parents of students at Karegari College of Tehran in 2015, and a sample of 130 people was selected through random sampling. Correlation tests and structural equation modelling were used to analyse the data. The results indicate that college social responsibility and service quality significantly influence both perceived benefits and satisfaction. Further, perceived benefits affect satisfaction with the college and, consequently, loyalty to it.

Developing strategic relationships for religious tourism businesses: a systematic literature review
The purpose of this paper is to systematically analyse the existing literature on the inter-relationships of tourism businesses in the context of religious tourism. Taking a systematic approach to the literature review, a total of 34 academic articles were content-analysed. As a general framework, Watkins and Bell's classification was applied to increase the review's consistency and validity. The results show that the inter-relationships of tourism businesses have gained increasing academic attention over the past decade in terms of the number of published articles. The various categories of inter-organisational relationship used in tourism business settings are explored, and a timeline of business-relationship research in the tourism literature is presented along with a methodological overview. Most existing articles claim to study the cooperative relationships of tourism businesses, and the main methodological approach used to study their inter-organisational relationships was empirical.

Managerial competence and financial performance of SMEs: the contingent role of stakeholder engagement
This paper examines the moderating role of stakeholder engagement in the relationship between managerial competence and financial performance. Using a survey-based approach, the study examined 423 small and medium-scale firms operating in Ghana, a Sub-Saharan African country. The findings indicate that stakeholder engagement does not help managerial competence to have a positive impact on financial performance. However, both independent variables, acting separately, have a positive and significant relationship with financial performance. Hence, it is recommended that SMEs invest more in employing competent managers, or in training existing managers to become more competent, as that alone, without the assistance of stakeholder engagement, can improve financial performance.