Subscribe: Inderscience
Language: English


Preview: Inderscience


This New Articles Channel contains the latest articles published in Inderscience's distinguished academic, scientific and professional journals.


Reverse logistics operations in a pharmaceutical retail environment
Not all sales and deliveries to consumers or intermediaries are final: purchases may be returned for a number of reasons, and consumers and intermediaries often find medicines accumulating undesirably. Retailers face similar accumulations owing to poor sales, unexpected customer returns, perishability, etc. A reverse flow (reverse logistics) is therefore required to carry such medicines backward for value recovery or proper disposal. This research provides a detailed account of the reverse logistics management of undesired medicines, the various related issues and its performance in the Indian pharmaceutical retail environment. The major issues examined are what is being done (practices), why it is being done (drivers), how it is being done (return conditions) and what inhibits it (barriers). In addition, the performance of reverse logistics programmes was evaluated from the retailers' perspective. The findings of this study are expected to establish evidence on prevalent reverse logistics practices and related issues for academics and practitioners.

Determinants of profitability in the Indian logistics industry
The Indian economy has one of the highest transportation and logistics costs as a percentage of gross domestic product (13%) globally. This paper analyses trends in profitability and discusses key macro- and micro-level factors influencing the Indian logistics industry, comprising road transport logistics, storage and distribution. It discusses the role of macroeconomic factors such as tax policy in shaping logistics network complexity, which in turn increases logistics costs. At the micro level, the paper uses firm-level data on 201 companies from the Prowess database and estimates an econometric model of the major determinants of profitability in the logistics sector. The study finds that liquidity, market share, debt-equity ratio and age are significant determinants of profitability in the logistics sector.

Enablers of warning and recovery capabilities in supply chains: an empirical study
Supply chain firms face a growing challenge in addressing uncertainties in a positive manner. Research on resilience and risk mitigation strategies has highlighted two effective capabilities: warning and recovery. However, the literature on how these capabilities are developed remains nascent. The current study adds to this nascent literature by exploring the role of organisational culture and lean production processes in generating warning and recovery capabilities. Survey instruments were developed from established sources (with necessary adaptation); 212 completed responses were obtained via an online survey and analysed using partial least squares. The findings show that organisational culture positively influences both warning capability and recovery capability. Lean production processes contribute positively to the development of recovery capability, but their influence on warning capability was not supported. Implications for managers are also provided.

How does logistics performance promote international trade volume? A comparative analysis of developing and developed countries
This paper analyses how logistics performance affects international trade volume and compares the effects between developing and developed countries, employing a gravity model with panel data from 43 countries in 2010, 2012 and 2014. The findings show that an improvement in the logistics performance index (LPI) has more impact on export volume than on import volume, and a more powerful influence on developed countries' trade volumes than on developing countries'. To improve the competitiveness of their exports in a global economy, developing countries should first and foremost prioritise improvements in areas such as the customs process, tracking and infrastructure.

Trade concentration and dynamics of the Norwegian imports: an application of R-MANOVA model
This article analyses the trade concentration and dynamics of Norwegian import expenditures by applying a two-way random-effect MANOVA (R-MANOVA) model. The factors considered in this econometric analysis are the origin continents or countries (spatial effects) and the business cycles (dynamic effects). The R-MANOVA model fit estimation results confirm that Norwegian import trade is sustainable in both the short and long run, controlling for the effects of origin continent and business cycle. More importantly, the expenditure and share of Norwegian imports across continents show considerable dynamics. The overall estimation results suggest that Norwegian import expenditure is increasing over time across all continents, while the share of import expenditures across continents is relatively stable. The analysis confirms that European exporters will remain the leading partners for Norwegian import expenditures in future trade patterns, with the remaining continents ranked, in descending order, as Asia and Oceania, North and Central America, South America and Africa.

Cricket chirping algorithm: an efficient meta-heuristic for numerical function optimisation
Nature-inspired meta-heuristic algorithms have proved to be very powerful in solving complex optimisation problems in recent times. The literature reports several inspirations from nature, exploited to solve computational problems. This paper is yet another step in the journey towards the utilisation of natural phenomena for seeking solutions to complex optimisation problems. In this paper, a new meta-heuristic algorithm based on the chirping behaviour of crickets is formulated to solve optimisation problems. It is validated against various benchmark test functions and then compared with popular state-of-the-art optimisation algorithms like genetic algorithm, particle swarm optimisation, bat algorithm, artificial bee colony algorithm and cuckoo search algorithm for performance efficiency. Simulation results show that the proposed algorithm has outperformed its counterparts in terms of speed and accuracy. The implication of the results and suggestions for further research are also discussed.
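As a rough illustration of the population-based search loop such meta-heuristics share, here is a minimal sketch in which a "chirp rate" that decays over iterations scales the random exploration step; the update rule, parameters and objective are hypothetical stand-ins, since the abstract does not give the algorithm's actual equations:

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimise)."""
    return sum(v * v for v in x)

def chirp_search(f, dim=5, pop=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Hypothetical chirping-inspired search: each candidate moves toward
    the best solution, perturbed by a step scaled by a 'chirp rate' that
    decays as the search converges."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=f)
    for t in range(iters):
        rate = 1.0 - t / iters  # chirp rate decays over time
        for i, x in enumerate(xs):
            cand = [min(hi, max(lo, xi + rate * rng.uniform(-1, 1)
                                + 0.5 * (bi - xi) * rng.random()))
                    for xi, bi in zip(x, best)]
            if f(cand) < f(x):  # greedy acceptance
                xs[i] = cand
        best = min(xs + [best], key=f)
    return best

best = chirp_search(sphere)
print(sphere(best) < 5.0)
```

Validation against benchmark functions, as the paper describes, would compare final objective values and convergence speed across repeated seeded runs.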

Optimising the stiffness matrix integration of n-noded 3D finite elements
The integration of the stiffness and mass matrices in finite element analysis is a time-consuming task. When dealing with large problems with very fine discretisations, the finite element mesh becomes considerably large and several thousand elements are usually needed. Moreover, for nonlinear dynamic problems, the CPU time required to obtain the solution increases dramatically because of the large number of times the global matrix must be computed and assembled. This is why any reduction in computer time (however small) when evaluating the problem matrices is of great concern to engineers and analysts. The integration of the stiffness matrix of n-noded high-order hexahedral finite elements is carried out by exploiting mathematical relations among the nine terms of the nodal stiffness matrix, previously found for the simpler brick element. Significant time savings were obtained in the 20-noded finite element example case.

A cost effective graph-based partitioning algorithm for a system of linear equations
There are many techniques for reducing the number of operations in directly solving a system of sparse linear equations. One such method is nested dissection (ND). In numerical analysis, the ND algorithm heuristically divides and conquers a system of linear equations based on graph partitioning. In this article, we present a new algorithm for the first level of such graph partitioning, which splits a graph into two roughly equal subgraphs. The algorithm runs in almost linear time. We evaluate and discuss the solving costs by applying the proposed algorithm to various matrices.
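To illustrate what the first level of such a partitioning produces, here is a minimal sketch that bisects a connected graph by breadth-first visitation order; it is a naive stand-in, not the paper's almost-linear-time algorithm, whose details the abstract does not give:

```python
from collections import deque

def bfs_bisect(adj):
    """Split a connected graph into two roughly equal halves: run BFS from
    an arbitrary vertex and take the first half of the visitation order as
    one part. Real nested-dissection partitioners also minimise the
    separator between the halves, which this sketch ignores."""
    start = next(iter(adj))
    order, seen = [], {start}
    q = deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    half = len(order) // 2
    return set(order[:half]), set(order[half:])

# 6-cycle: 0-1-2-3-4-5-0
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
a, b = bfs_bisect(cycle)
print(len(a), len(b))  # 3 3
```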

A set-partitioning-based exact approach to the multi-trip problem of picking up and delivering customers to airports
Picking up and delivering customers to airports (PDCA) is a new service provided in China. The multi-trip mode of PDCA (MTM-PDCA) is a promising measure to reduce operating costs. To obtain the exact solution, we propose a novel two-stage modelling approach. In the first stage, all feasible trips for each subset of the customer point set are produced, from which the two locally optimal trips of each subset can be obtained easily. Subsequently, using the locally optimal trips obtained in the first stage, we establish a novel trip-oriented set-partitioning (TO-SP) model to formulate MTM-PDCA, which can then be solved exactly by CPLEX. Through extensive test instances, we summarise several managerial insights that can be used to reduce the costs of PDCA through the multi-trip mode.

Reliability prediction and QoS selection for web service composition
The key issues in the development of web service composition are dynamic, efficient reliability prediction and the appropriate selection of component services. In this paper, we address web service composition in two ways: reliability prediction and QoS-optimal selection. First, we propose a reliability prediction model based on Petri nets; to address the complex connecting relationships among subservices, the input and output places of the basic Petri net are extended into subtypes for multi-source input and multi-use output. Second, we use a new skyline algorithm based on an R-tree index: the index tree is traversed, judging for each entry whether it is dominated by the candidate skyline set. Experimental evaluation on real and synthetic data shows the effectiveness and efficiency of the proposed approach.
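The dominance test at the heart of any skyline computation can be sketched as follows; the naive scan below is an assumed baseline that omits the paper's R-tree acceleration, and the service data are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every QoS dimension and strictly
    better in at least one (here: smaller is better, e.g. latency, cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive skyline: keep only the points dominated by no other point.
    An R-tree index, as in the paper, lets whole subtrees be pruned
    instead of testing every point pairwise."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical candidate services as (latency_ms, cost) pairs
services = [(100, 0.2), (120, 0.1), (150, 0.3), (90, 0.4)]
print(skyline(services))  # [(100, 0.2), (120, 0.1), (90, 0.4)]
```

Only (150, 0.3) is dropped: (100, 0.2) is at least as good in both dimensions and strictly better in one.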

Towards optimisation of replicated erasure codes for efficient cooperative repair in cloud storage systems
The study of erasure codes in distributed storage systems has two aspects: reducing data redundancy and saving bandwidth during the repair process. Repair-efficient codes have been investigated to improve repair performance; however, this research is mostly theoretical and rarely applied in practical distributed storage systems such as cloud storage. In this paper, we present a unified framework describing several repair-efficient regenerating codes that reduce the bandwidth cost of regenerating lost data. We build an evaluation system to measure the performance of these codes during file encoding, file decoding and individual-failure repair with given feasible parameters. Through experimental comparison and analysis, we validate that repair-efficient regenerating codes save significantly more repair time than traditional erasure codes at the same storage cost; in particular, some replication-based erasure codes perform better than others in certain cases. Our experiments can help researchers decide which kind of erasure code to use when building distributed storage systems.

Signal prediction based on boosting and decision stump
Signal prediction has attracted increasing attention from the data mining and machine learning communities. A decision stump is a one-level decision tree that classifies instances by sorting them on feature values. Boosting is a powerful ensemble method that can significantly improve prediction performance. In this paper, boosting and the decision stump algorithm are combined to analyse and predict signal data. An experimental evaluation on a public signal dataset shows that the combined boosting and decision stump algorithm clearly improves the performance of signal prediction.
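The combination described above can be sketched with a hand-rolled AdaBoost over one-feature threshold stumps; the toy signal data and round count are illustrative assumptions, not the paper's dataset or settings:

```python
import math

def train_stump(X, y, w):
    """Best one-feature threshold classifier under weights w."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[j] >= thr else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """Boosting over stumps: upweight misclassified points each round."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        pred = [sign if x[j] >= thr else -sign for x in X]
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x[j] >= t else -s) for a, j, t, s in ensemble)
    return 1 if score >= 0 else -1

# Toy 1D signal samples: label +1 when amplitude exceeds 0.5
X = [[0.1], [0.2], [0.4], [0.6], [0.8], [0.9], [0.45], [0.55]]
y = [-1, -1, -1, 1, 1, 1, -1, 1]
model = adaboost(X, y)
print(all(predict(model, x) == yi for x, yi in zip(X, y)))  # True
```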

A matching approach to business services and software services
Recent studies have shown that service-oriented architecture (SOA) has the potential to revive enterprise legacy systems (Cai et al., 2011; Gaševic and Hatala, 2010; De Castro et al., 2011; Chengjun, 2008; Elgedawy, 2009; Tian et al., 2007; Chen et al., 2009; Zhang et al., 2006; Sindhgatta and Ponnalagu, 2008; Khadka, 2011), making their continued service in the corporate world viable. In the process of reengineering legacy systems to service-oriented architecture, some software services extracted from legacy systems can be reused to implement business services in the target systems. In order to achieve efficient reuse of software services, a matching approach is proposed to extract the software services related to specified business services, in which service semantics and structure similarity measures are integrated to evaluate the degree of similarity between business services and software services. Experiments indicate that the approach can efficiently map business services to relevant software services, so that legacy systems can be reused as much as possible.

A new model of vehicular ad hoc networks based on artificial immune theory
Vehicular ad hoc networks (VANETs) are highly mobile and wireless networks intended to aid vehicular safety and traffic monitoring. To achieve these goals, we propose a VANET model based on immune network theory. Our model outperforms the delay tolerant mobility sensor network (DTMSN) model over a range of node numbers in terms of data packet arrival delay, arrival ratio, and throughput. These findings held true for the on-demand distance vector and connection-based restricted forwarding routing protocols. The model performed satisfactorily on a real road network.

Using online dictionary learning to improve Bayer pattern image coding
Image quality is a fundamental concern in image compression. Compression introduces considerable noise, which can prevent users from obtaining precise identification, and noise has therefore largely been neglected in past compression research. In fact, noise can play a beneficial role in image reconstruction. In this paper, we take noise into account and propose a coding method for Bayer pattern images based on online dictionary learning. Experiments show that the proposed method can improve the rate-distortion performance of Bayer pattern image coding at any bit rate.

Feature binding pulse-coupled neural network model using a double colour space
The feature binding problem is one of the central issues in cognitive science and neuroscience. To implement the bound identification of colour and shape in a colour image, a double-space vector feature binding PCNN (DVFB-PCNN) model was proposed based on the traditional pulse-coupled neural network (PCNN). In this model, combining the RGB colour space with the HSI colour space successfully solved the problem that colours cannot always be separated completely in a single space. Using the first pulse emission time of the neurons, the different characteristics were successfully separated, and through the colour sequence produced by this process, the characteristics belonging to the same perceived object were bound together. Experiments showed that the model can successfully achieve separation and binding of image features and will be a valuable tool for PCNN-based feature binding in colour images.

A proposed grey fuzzy multi-objective programming model in supplier selection: a case study in the automotive parts industry
Supplier evaluation and quota allocation is a research area that has drawn considerable attention. In previous studies, suppliers have mostly been assessed with multi-objective decision-making models when objectives conflict. In this paper, the problems of supplier selection and quota allocation are investigated under conflicting criteria where the accessible data are imprecise. One of the main contributions of the current study is selecting the superior suppliers from a large pool with respect to conflicting objectives. Given that the data are defined as grey numbers, the grey multi-objective programming (MOP) model is solved by the goal programming (GP) method with fuzzy aspiration levels. The proposed approach is evaluated through a real case study at an automotive parts manufacturer, and finally a sensitivity analysis is carried out to examine the effect of variations in the criteria values on the supplier quotas.

Selecting and scheduling interrelated projects: application in urban road network investment
Decisions about the selection of projects, alternatives, investments, operating policies and their implementation schedules are major subjects in various fields including operations research, financial analysis, business management, engineering economy and transportation planning. In these disciplines, sufficiently good methods have been developed for planning and prioritising projects when interrelations among those projects are negligible. However, methods for analysing interrelated alternatives are still inadequate. We propose a combinatorial method for evaluating and scheduling interrelated road network projects. In particular, this paper demonstrates how a traffic assignment model can be combined effectively with a genetic algorithm (GA) in a multi-period analysis to select and schedule road network projects while capturing interactions among those projects. The goal is to determine which projects should be selected and when they should be funded in order to minimise the present value of total system cost over a planning horizon, subject to budget flow constraints.

λ-GRASP with bi-directional path relinking for the bi-objective orienteering problem
This paper presents a new approach to solving the bi-objective orienteering problem (BOOP), a multi-objective extension of the well-known orienteering problem (OP). The multi-objective aspect stems from personalised tourist route planning, in which each point of interest in a city provides different profits associated with different categories. The aim of the BOOP is to find routes that satisfy a given travel cost restriction and visit points of interest so as to maximise the total profit collected across the different categories. To generate a good approximation of the Pareto-optimal solutions, we develop a new metaheuristic that hybridises λ-GRASP with a new variant of the path relinking procedure called bi-directional path relinking (BDPR). The latter is used as an intensification phase, with the goal of obtaining new solutions that can eventually join the set of Pareto-optimal solutions. The proposed approach is tested on benchmark instances taken from the literature and compared with the Pareto ant colony optimisation algorithm (P-ACO) and the variable neighbourhood search method (VNS). Computational results show that, compared with the P-ACO and VNS procedures, the proposed method provides a good approximation of the Pareto front for the bi-objective orienteering problem.

Optimising replenishment policy in an integrated supply chain with controllable lead time and backorders-lost sales mixture
This paper aims to optimise the inventory replenishment policy in an integrated supply chain consisting of a single supplier and a single buyer. The system under consideration has the features such as backorders-lost sales mixture, controllable lead time, stochastic demand, and stockout costs. The underlying problem has not been studied in the literature. We present a novel approach to formulate the optimisation problem, which is able to satisfy the constraint on the number of admissible stockouts per time unit. To solve the optimisation problem, we propose two algorithms: an exact algorithm and a heuristic algorithm. These two algorithms are developed based on some analytical properties that we established by analysing the cost function in relation to the decision variables. The heuristic algorithm employs an approximation technique based on an ad-hoc Taylor series expansion. Extensive numerical experiments are provided to demonstrate the effectiveness of the proposed algorithms.

Green supply chain management in Indian automotive sector
The pressure on automotive companies to embrace green processes has increased significantly in recent years. A review of the existing literature highlights a need to understand how green supply chain management (GSCM) practices can contribute to improving company performance from environmental, economic and operational perspectives. This research aims to test the relationship between GSCM practices and performance for companies that have adopted, or plan to adopt, GSCM in an emerging economy such as India. The results show that the effect of GSCM practices on company performance varies, implying that companies appear to have failed to understand the link between GSCM practices and performance.

Inventory and production planning for component remanufacturing in an original equipment manufacturer-closed loop supply chain
In this study, an OEM assembling a single product from multiple components is considered. The components are obtained by manufacturing from raw materials, remanufacturing from returns, or procurement from external suppliers. The returns, i.e., products returned after use by customers, are obtained after paying a return acquisition price, a fixed price based on the utilisation time of the return. The returns are dismantled into components that: 1) can be remanufactured; 2) cannot be remanufactured (and are meant for disposal). The manufacturing, remanufacturing and assembly operations are integrated as a closed-loop supply chain (CLSC) system that can be operated by any OEM. The literature review suggests that the OEM-CLSC system in which components are remanufactured by the OEM itself has not been considered in previous research. This study addresses this gap by considering a single-product, multi-component remanufacturing OEM-CLSC problem. A mathematical model is proposed to identify the optimal inventories and production plan. Finally, an empirical analysis is carried out to determine the breakeven capacity for the remanufacturing operation by introducing a suitable breakeven analysis model.

Total quality management and quality certification: effects on organisational performance
The purpose of this research is to analyse the relationship between total quality management (TQM) and quality certification, in order to assess whether TQM practices are indispensable preconditions for quality certification in organisations, and likewise to analyse their impact on organisational performance. Data were obtained through an online questionnaire sent to small and medium-sized Portuguese companies, and the study was conducted on the 287 valid questionnaires received. The findings indicate that certified companies have implemented TQM practices; however, certification does not translate into improved financial performance, leading only to operational improvement. Furthermore, the findings show that implementing TQM practices provides an improvement in both operational and financial performance.

Effect of process parameters in electric discharge machining of D2 steel and estimation of coefficient for predicting surface roughness
D2 steel is widely used in the tool and die making industry for a variety of applications. Owing to its superior mechanical properties, machining this material by conventional methods is challenging. In the present work, the effect of EDM process parameters on the surface roughness of D2 steel is investigated, and the parameters are optimised to achieve the least surface roughness. It is found that Ra decreased by 6.92% as the current changed from 3 A to 4.5 A, then increased by 67.82% with a further increase in current. Changes in spark gap and pulse-on time increased surface roughness by 8.78% and 22%, respectively. At the optimum levels of the process parameters, the least surface roughness achieved is 2.14 µm. SEM images reveal that the cyclic melting and cooling of the metal in EDM creates a network of cracks and recast layers on the machined region.

Optimisation of abrasive water jet cutting process parameters for AA5083-H32 aluminium alloy using fuzzy TOPSIS method
This paper reports the identification of abrasive water jet (AWJ) process parameters for cutting AA5083-H32 aluminium alloy using the fuzzy TOPSIS method. Precise parameter identification matters in AWJ cutting because it determines the quality and performance of the process. In the present work, optimisation studies were carried out using the fuzzy TOPSIS method to determine better optimal process parameters. The Taguchi full factorial method was used for the experimental design, and the weighting of each output response was determined by a triangular fuzzy number method. Optimal input parameters of 150 MPa water jet pressure, #80 abrasive mesh size and an 80° jet impingement angle are suggested for precise AWJ work. A scanning electron microscope (SEM) was used to examine the AWJ-cut surfaces at the optimal process parameter settings, and energy-dispersive X-ray spectroscopy was employed to confirm the silicon particles embedded in the cut surfaces. The experimental results indicate an improvement in AWJ cutting quality through the fuzzy TOPSIS identification of better optimal input process parameters.
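The core TOPSIS mechanics the paper builds on can be sketched in crisp (non-fuzzy) form as follows; the trial matrix and weights are invented for illustration, and the paper's triangular-fuzzy weighting step is omitted:

```python
import math

def topsis(matrix, weights, benefit):
    """Classic crisp TOPSIS: vector-normalise, weight, then score each
    alternative by its relative closeness to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v)
             for j in range(n)]
    worst = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v)
             for j in range(n)]
    scores = []
    for r in v:
        d_pos = math.sqrt(sum((r[j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((r[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical cutting trials: (surface quality: benefit, kerf taper: cost)
trials = [[7.0, 2.1], [8.5, 2.4], [6.0, 1.8]]
scores = topsis(trials, weights=[0.6, 0.4], benefit=[True, False])
print(max(range(3), key=lambda i: scores[i]))  # 1
```

With these made-up numbers, trial 1 wins: its high surface quality outweighs its slightly worse kerf taper under the 0.6/0.4 weighting.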

A comparative study on the effectiveness of TiN, TiCN, and AlTiN coated carbide tools for dry micro-milling of aluminium, copper and brass at low spindle speed
The objective of this study is to investigate the effectiveness of titanium nitride (TiN), titanium carbo-nitride (TiCN) and aluminium titanium nitride (AlTiN) coatings on tungsten carbide (WC) cutting tools for minimising tool wear and tool breakage during dry micro-milling of aluminium, copper and brass at lower spindle speeds. A comparative analysis of machining speed, surface finish and tool wear was carried out for the three materials using uncoated and coated carbide tools. The TiN-coated tools produced a comparatively smoother surface finish in all three materials, and the TiN coating was effective in reducing tool wear during micro-milling at higher feed rates and depths of cut, making it suitable for faster machining. Among the three materials, brass produced the best surface finish, followed by aluminium and copper. Considering all the performance indicators, brass exhibited superior machinability to copper and aluminium in micro-milling with both coated and uncoated carbide tools.

Prediction model development for material removal rate in band sawing using dimensional analysis approach
Bandsawing is an accurate and fast process for cutting various raw materials, and material removal rate (MRR) is an important parameter for judging its performance. To the best of the author's knowledge, almost no researchers have developed a semi-empirical model of the combined effect of process, material and machine parameters for the bandsawing process. Hence, in the present work, a semi-empirical model for MRR is developed using a dimensional analysis approach, with experiments conducted using Taguchi's technique. According to ANOVA, feed (58%), speed (17.44%) and top arm angle (13.55%) were found to be significant parameters. The model is formulated as a function of these parameters and validated by calculating the mean error (0.024), root mean square error (0.029) and percentage average error (6.63%). The model is verified by randomly substituting values from the experimental dataset; the predicted results are in close agreement with the experimental results.

Machining parameter optimisation for aviation aluminium-alloy thin-walled parts in high-speed milling
Aviation aluminium alloy is widely used to manufacture thin-walled parts in the aviation industry. Given the low stiffness of thin-walled parts, high-speed milling provides an efficient approach for machining aviation aluminium-alloy thin-walled parts with high quality and high efficiency, and the machining parameters directly affect the machining quality. It is therefore necessary to analyse the machining quality of such parts under different machining parameters and to optimise the parameters accordingly to reduce surface roughness. Taking the high-speed milling of typical 7075 aviation aluminium-alloy thin-walled parts as an example, an orthogonal experiment is adopted and range analysis is used to find the optimal machining parameter combination. Experimental results show that workpiece quality is best when the spindle speed is 7,000 rpm, the feed rate is 600 mm·min−1 and the radial feed is 0.3 mm within the investigated parameter range.

Contractor-furnished compaction testing: searching for correlations between potential alternatives to the nuclear density gauge in Missouri highway projects
The Missouri Department of Transportation's (MoDOT) past and present quality control and quality assurance programs for construction are examined. MoDOT's present quality management program, along with a small number of grading projects, has lowered the number of quality assurance (QA) soil compaction tests completed in the past two years. The department would like to stop using nuclear density gauges because of burdensome federal regulations, required training, security and licensing fees. Linear and multiple regression analyses were performed to see whether a correlation exists between nuclear density gauge dry density values and light weight deflectometer modulus values or Clegg hammer impact values. These relationships, or the lack thereof, will determine the technology used by construction contractors for compaction quality control testing if MoDOT moves away from nuclear density gauges for soil density verification.
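The kind of pairwise correlation the study computes can be sketched with a plain Pearson coefficient; the paired gauge and deflectometer readings below are hypothetical, not MoDOT data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two measurement series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired readings: nuclear gauge dry density (pcf)
# vs. light weight deflectometer modulus (MPa)
gauge = [112.0, 115.5, 118.2, 120.1, 123.4]
lwd = [38.0, 41.5, 44.8, 46.0, 51.2]
r = pearson_r(gauge, lwd)
print(round(r, 2))  # 0.99
```

A strong r would support replacing the gauge with the deflectometer; in practice the study also fits multiple-regression models across several devices.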

Quality tools and techniques: an introspection and detailed classification
In the present scenario, manufacturing organisations compete worldwide on the quality, functionality and versatility of their products, which drives accelerating product development and innovation, with detailed specifications defined at each phase of the development process to achieve the desired results. An important step in getting ahead in this competition is developing new products that create differentiation and meet customer requirements. Consequently, effective utilisation of a given quality tool or technique is highly reliant on adequate knowledge of that tool or technique, so as to achieve the anticipated results in terms of higher product quality and, ultimately, the demands of global competition. The main purpose of this paper is therefore to examine the usage, application and suitability of different quality tools and techniques (QT&T) in manufacturing organisations. In total, 152 quality tools and techniques are identified and categorised into 16 groups based on their characteristics of application, suitability and usage.

Inter-regional patterns of life span in Pakistan: a life table analysis
A set of life tables for seven geographic regions of Pakistan is presented. Data from the Pakistan Demographic Survey 2007 are used to prepare the life tables. Life expectancy at birth is highest for males in the province of Punjab, at 64.21 years, and lowest for females in Baluchistan province, at 54 years. Further, urban areas of Pakistan are at the forefront in terms of life expectancy. A substantial sex differential can be discerned from the life table analysis. The pattern of life expectancy in Baluchistan province is interesting to note: males have a higher life expectancy than females.

Risk assessment of quality management system failure via analytic hierarchy process and the effects on organisational sustainability
The current dependency of industries on quality management for economic development shows the need for research into the sustainability of organisations. At present, studies on quality and organisational sustainability do not include quality management risk factors that could affect the sustainability of organisations. This study aims to address this gap by identifying the relevant risk factors, specifically proposing the analytic hierarchy process (AHP) to identify the major risks of non-compliance with ISO 9001:2008 requirements and evaluating their effects on sustainability in the organisational context. The six major risks identified in the application of the method are: management not committed to quality; absence of a quality policy; responsibilities, authority and communication not well defined; absence of management review; an ineffective product non-conformity process; and an ineffective customer-related process. The proposed method represents a source of motivation for firms to focus on the quality aspects of the business to improve organisational sustainability.
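The AHP step at the core of such a study can be sketched in a few lines: pairwise comparisons of the risk factors on Saaty's 1-9 scale are converted into priority weights, with a consistency check on the judgements. The comparison matrix below is hypothetical, not the judgements elicited in the paper.

```python
import math

# Hypothetical pairwise comparison matrix for four quality-risk factors
# (Saaty 1-9 scale); A[i][j] = how much more important factor i is than j.
A = [
    [1,     3,     5,   7],
    [1 / 3, 1,     3,   5],
    [1 / 5, 1 / 3, 1,   3],
    [1 / 7, 1 / 5, 1 / 3, 1],
]
n = len(A)

# Geometric-mean approximation of the principal eigenvector (priorities).
geo = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(geo) for g in geo]

# Consistency check: lambda_max, consistency index CI, consistency ratio CR.
lam = sum(
    sum(A[i][j] * weights[j] for j in range(n)) / weights[i]
    for i in range(n)
) / n
ci = (lam - n) / (n - 1)
ri = 0.90            # Saaty's random index for n = 4
cr = ci / ri         # CR < 0.10 indicates acceptably consistent judgements

print([round(w, 3) for w in weights], round(cr, 3))
```

The resulting weights rank the non-compliance risks; an inconsistent matrix (CR ≥ 0.10) would send the analyst back to re-elicit the comparisons.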

An ant colony optimisation approach for no-wait permutation flow shop scheduling
This research addresses the application of variants of the ant colony optimisation (ACO) approach to the no-wait flow shop scheduling problem (NW-FSSP). The most suitable of the basic ACO algorithms was selected and modified to achieve improved results. The algorithm was coded in Visual Basic. The modified algorithm was applied to benchmark problems and the results were compared with those achieved previously by other researchers using different meta-heuristics. The research covers the detailed steps carried out in applying the basic ACO algorithms to benchmark problems, comparing the results they achieve, selecting the best of the basic algorithms, modifying the selected algorithm and generating the modified ACO algorithm. The modified ACO algorithm gave reasonably good results for almost all the problems under consideration and was able to handle fairly large problems in far less computational time. Comparative analysis showed that the proposed ACO algorithm performed better than the genetic algorithm on large problems and better than the Rajendran heuristic on almost all problems under consideration.
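A toy version of the basic ant system on a no-wait instance might look like the sketch below. The job data and parameter values are illustrative, and this is the textbook ant system, not the paper's modified algorithm; the no-wait makespan is evaluated through the standard pairwise delay formulation.

```python
import random

random.seed(1)

# Toy no-wait flow shop: p[j][k] = processing time of job j on machine k.
p = [[2, 3, 4], [4, 1, 2], [3, 3, 1], [1, 4, 3]]
n, m = len(p), len(p[0])

def delay(i, j):
    """Minimum start offset of job j after job i under the no-wait constraint:
    max over machines k of (completion of i on k) - (offset of j reaching k)."""
    return max(sum(p[i][:k + 1]) - sum(p[j][:k]) for k in range(m))

def makespan(seq):
    t = sum(delay(seq[a], seq[a + 1]) for a in range(len(seq) - 1))
    return t + sum(p[seq[-1]])        # last job runs through all machines

tau = [[1.0] * n for _ in range(n)]   # pheromone on "i followed by j"
rho, Q, ants, iters = 0.1, 10.0, 8, 60

best_seq, best_ms = None, float("inf")
for _ in range(iters):
    for _ in range(ants):
        unvisited = set(range(n))
        cur = random.choice(list(unvisited))
        unvisited.discard(cur)
        seq = [cur]
        while unvisited:
            cand = list(unvisited)
            # probability ~ pheromone * heuristic desirability (short delay)
            w = [tau[cur][j] / (1 + delay(cur, j)) for j in cand]
            cur = random.choices(cand, weights=w)[0]
            unvisited.discard(cur)
            seq.append(cur)
        ms = makespan(seq)
        if ms < best_ms:
            best_seq, best_ms = seq, ms
    # global update: evaporate, then reinforce the best-so-far sequence
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1 - rho
    for a in range(n - 1):
        tau[best_seq[a]][best_seq[a + 1]] += Q / best_ms

print(best_seq, best_ms)
```

On an instance this small the colony finds the optimal permutation quickly; the interest of the method is that the same loop scales to the large benchmark instances where exhaustive search fails.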

A conceptual framework of the relationship between total quality management, corporate social responsibility, innovation capability, and financial performance
Total quality management (TQM), corporate social responsibility (CSR), and innovation are considered strategic orientations that enable the achievement of a sustainable competitive advantage. However, there is a gap in the literature regarding the links between these three concepts in a single analysis: the majority of researchers neither study the three concepts together nor link them to financial performance. The main purpose of this conceptual paper is to examine the relationship between quality, innovation, corporate social responsibility and financial performance. We aim to build a conceptual framework with a particular emphasis on the role that TQM practices may play in developing the bidirectional link between corporate social responsibility and innovation capability, and on its impact on improving the firm's financial performance.

The level of quality teaching in private institutions: comparative study based on students' perceptions and expectations
Investigating and utilising feedback, academic staff opinions and students' perceptions of teaching and learning will affect the quality of teaching and play an important role in building the reputation of institutions. One of the main purposes of this article is to highlight the importance of students' opinions in institutions by reviewing several aims and plans of educational bodies in Oman. In particular, the paper reviews two national studies on students' perceptions and expectations of the quality of teaching in private institutions: the methodologies of the two studies are discussed and useful, simple comparisons are drawn between them. The paper identifies ten indicators common to the two studies; the results for these indicators are tested and compared and, in general, are only average and approximately equal across the studies. The conclusions are clarified and the limitations of both studies are discussed in order to improve future studies on quality teaching.

A genetic algorithm-based schedule optimisation of a job shop with parallel resources
This paper presents a genetic algorithm for a job shop scheduling problem with parallel machines, with the objective of minimising the makespan. Many industrial work centres are now shifting their focus towards implementing job shop scheduling models, and large throughput and just-in-time restrictions have increased the requirement for additional parallel machines at various production stages. This creates scope for research on optimising the job shop problem with parallel machines; however, research on this topic is very limited compared to other job shop problems. The proposed genetic algorithm effectively finds near-optimal makespan schedules for the job shop problem with parallel machines. After solving an example, the performance of the algorithm was examined on a set of test problems: the computational test was performed on moderate benchmark instances from the literature, suitably modified, and the results were compared with heuristic methods available in the literature for this problem. It is shown that the proposed GA performs reasonably well compared to the other techniques under consideration.
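As a much-simplified sketch of the idea, a GA can be applied to the parallel-resource part of the problem alone: assigning jobs to identical machines so as to minimise the makespan. The data, encoding and operators below are illustrative and far simpler than the paper's full job shop model.

```python
import random

random.seed(7)

# Hypothetical processing times; an individual assigns each job a machine.
times = [7, 5, 3, 8, 2, 6, 4, 9]
n_jobs, n_mach = len(times), 3

def makespan(assign):
    """Makespan = load of the most heavily loaded machine."""
    load = [0] * n_mach
    for job, mach in enumerate(assign):
        load[mach] += times[job]
    return max(load)

def crossover(a, b):
    cut = random.randrange(1, n_jobs)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    ind = ind[:]
    ind[random.randrange(n_jobs)] = random.randrange(n_mach)
    return ind

pop = [[random.randrange(n_mach) for _ in range(n_jobs)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=makespan)
    elite = pop[:10]                      # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        children.append(mutate(crossover(a, b)))
    pop = elite + children

best = min(pop, key=makespan)
print(makespan(best))
```

A real job shop chromosome would also encode operation order and machine eligibility, but the select-cross-mutate loop is the same.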

Interactive rendering of light scattering in dust molecules using particle systems
This paper introduces a technique for rendering volumetric lighting effects in a dusty atmosphere using particle systems. Although numerous techniques have been proposed for rendering these effects, they still lack realism in interactive applications. The technique is based on sampling planes to compute the radiance transport equation and uses a dynamic model of the dust. The scattering of light is computed using fragment shaders and 2D textures on the graphics hardware, while the dust itself is generated with a particle-engine technique. The technique is efficient and accurate in mimicking realistic scenes with effects such as light scattering in dust molecules. In addition, volumetric shadows are created as a result of the density and size of the particles within the participating medium. Light scattering is therefore generated in the presence of a dusty medium, providing visual cues closer to reality.

Computer-vision-based bare-hand augmented reality interface for controlling an AR object
In this paper, we design and implement a vision-based bare-hand interface that manipulates virtual objects in augmented reality environments in a natural fashion. Over the years, much research on vision-based human-computer interaction has been conducted in connection with augmented reality technology. Various vision-based interfaces have been developed that utilise the movement of the eyes, hands and body to mediate interaction between humans and objects, addressing problems such as the portability and efficiency of widely used interfaces like the mouse and keyboard. Many previous studies of augmented reality technology actively pursue interaction with the user interface through information extracted from the image seen by the human eye, for example when the user wears glasses. In this work, we developed a vision-based AR interface that can interact with and control augmented virtual objects, such as architectural or engineering 3D models, by recognising simple gestures of both bare hands in augmented reality.

Efficiency and stability of EN-ReliefF, a new method for feature selection
One of the most advanced forms of industrial maintenance is predictive maintenance: analysis of the present behaviour of a piece of equipment helps to predict its future behaviour. Since the diagnosis of faults in rotating machines is an important subject for increasing their productivity and reliability, the choice of features to be used for classification and diagnosis is a crucial point. Using all the possible features increases the computational cost and can even increase the classification error because of redundant and non-significant features. In this context, we present different methods of feature selection and propose a new approach that selects the best features among those available and performs classification and identification using the selected features. A study of the stability of the proposed method is also provided.
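The core Relief scoring step on which the ReliefF family builds can be sketched as follows: each feature is rewarded when it separates an instance from its nearest neighbour of the other class ("nearest miss") and penalised when it separates it from its nearest neighbour of the same class ("nearest hit"). The data are made up, and the ensemble layer that distinguishes EN-ReliefF is not reproduced here.

```python
import math

# Toy two-feature, two-class dataset; feature 0 separates the classes,
# feature 1 does not.
X = [[0.9, 0.1], [0.8, 0.5], [0.7, 0.9], [0.2, 0.2], [0.1, 0.6], [0.3, 0.8]]
y = [1, 1, 1, 0, 0, 0]

n_feat = len(X[0])
weights = [0.0] * n_feat
for i, xi in enumerate(X):
    same = [X[j] for j in range(len(X)) if j != i and y[j] == y[i]]
    other = [X[j] for j in range(len(X)) if y[j] != y[i]]
    hit = min(same, key=lambda s: math.dist(xi, s))    # nearest hit
    miss = min(other, key=lambda s: math.dist(xi, s))  # nearest miss
    for f in range(n_feat):
        # reward separation from the miss, penalise separation from the hit
        weights[f] += abs(xi[f] - miss[f]) - abs(xi[f] - hit[f])

weights = [w / len(X) for w in weights]
print(weights)
```

Features are then ranked by weight and only the top-scoring ones are passed to the classifier, which is exactly the cost/error trade-off motivating feature selection above.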

ECG signal compression using filter bank based on Hermite polynomial
The electrocardiogram (ECG) signal must be compressed for storage and transmission. In this paper we propose a new filter bank based on the Hermite function for effective compression of the ECG signal. The Hermite function is used to derive the low-pass filter taps, and compression is carried out using the discrete wavelet transform (DWT). The compression scheme is implemented and a performance evaluation is presented based on the standard compression indices: compression ratio (CR), percent root mean square difference (PRD) and cross-correlation coefficient (CCC). The retrieval of the dominant morphological features of the ECG waveform, such as the P-QRS-T complex, upon reconstruction is also verified. The results are presented using the MIT-BIH and CSE-DS-5 databases and reflect that the quality of the reconstructed signal is excellent, with minimal loss of the diagnostically important features.
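The three evaluation indices named above can be illustrated on a toy signal; a one-level Haar transform stands in below for the Hermite-based filter bank, which is beyond this sketch, and the signal and threshold are invented.

```python
import math

# Synthetic "ECG-like" signal: a slow wave plus a small fast component.
x = [math.sin(2 * math.pi * k / 32) + 0.05 * math.sin(2 * math.pi * k / 3)
     for k in range(64)]

# One-level Haar analysis: approximation and detail coefficients.
approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(32)]
detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(32)]

# Crude compression: zero out small detail coefficients.
kept = [d if abs(d) > 0.05 else 0.0 for d in detail]
nonzero = len(approx) + sum(1 for d in kept if d != 0.0)
cr = len(x) / nonzero                      # compression ratio (CR)

# Haar synthesis from the retained coefficients.
xr = []
for a, d in zip(approx, kept):
    xr += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]

# Percent root mean square difference (PRD).
prd = 100 * math.sqrt(sum((u - v) ** 2 for u, v in zip(x, xr)) /
                      sum(u ** 2 for u in x))

# Cross-correlation coefficient (CCC) between original and reconstruction.
mx, mr = sum(x) / len(x), sum(xr) / len(xr)
ccc = (sum((u - mx) * (v - mr) for u, v in zip(x, xr)) /
       math.sqrt(sum((u - mx) ** 2 for u in x) *
                 sum((v - mr) ** 2 for v in xr)))

print(round(cr, 2), round(prd, 2), round(ccc, 4))
```

A good scheme drives CR up while keeping PRD low and CCC near 1, which is the trade-off the paper's indices quantify.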

Comparative study of design and analysis of gripper systems for bore well rescue operation
Rescuing a trapped child from a bore well is always a challenging task for the rescue operation team. In recent years, rescue robots have been used to save children in a short duration. In this work, we designed and developed three types of robotic arm, with rectangular, square and cylindrical mechanical gripper systems, to rescue a child from a bore well safely. Structural and performance analyses were carried out to check the effectiveness of the robotic arms. It was observed that the rectangular- and square-based robotic arms have high displacement during the gripping operation, and high displacement of the arm may cause injuries to the child. We then explored a cylindrical robotic arm based gripper system to improve the effectiveness of the robotic arm. Our experimental results show that the cylindrical robotic arm outperforms the rectangular and square robotic arms. The robotic arms were practically tested in real rescue operations.

Performance metrics on ultra low power polyphase decimation filter using carbon nanotube field effect transistor technology
Low power consumption and reduced area are the most important criteria in designing a digital signal processor. Multi-rate signal processing studies digital signal processing systems that include sample rate conversion, and filters are substantial building blocks of DSP. Polyphase filters are a key component in crafting various filter structures: a polyphase structure employs an FIR filter, which leads to a very efficient implementation. The polyphase decimation filter is generally built with multipliers, parallel-in serial-out shift registers, serial-in parallel-out shift registers, a ripple carry adder, a carry lookahead adder and parallel-in parallel-out shift registers as delay elements. To achieve the desired performance parameters for the multiplier, an efficient adder is proposed and embodied in the multiplier. The carbon nanotube field effect transistor (CNTFET) is a promising new device that may overcome some of the limitations of the silicon-based MOSFET. The circuits are designed in 32 nm CMOS and CNTFET technology in Synopsys HSPICE, and performance parameters such as power, delay and power-delay product are assessed and compared in both technologies.
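The efficiency of the polyphase structure comes from an identity worth sketching: filtering at the high rate and then keeping every Mth sample gives the same output as splitting the taps into M sub-filters that each run at the low rate. The tap values, signal and decimation factor below are illustrative.

```python
M = 3                                    # decimation factor
h = [0.05, 0.15, 0.3, 0.3, 0.15, 0.05]   # hypothetical low-pass FIR taps
x = [float(k % 7) for k in range(30)]    # arbitrary test input

def fir(sig, taps):
    """Causal FIR convolution with zero-padded start-up."""
    return [sum(taps[j] * (sig[i - j] if i - j >= 0 else 0.0)
                for j in range(len(taps))) for i in range(len(sig))]

# Direct form: filter at the high rate, then keep every Mth sample.
direct = fir(x, h)[::M]

# Polyphase form: sub-filters e_k[n] = h[nM + k] applied to the decimated
# sub-sequences x_k[n] = x[nM - k], branch outputs summed.
sub = [h[k::M] for k in range(M)]
branches = []
for k in range(M):
    xk = [(x[n * M - k] if n * M - k >= 0 else 0.0)
          for n in range(len(x) // M)]
    branches.append(fir(xk, sub[k]))
out = [sum(b[n] for b in branches) for n in range(len(x) // M)]

# The two forms agree sample-for-sample, but the polyphase form performs
# all multiplications at the low output rate.
print(all(abs(a - b) < 1e-9 for a, b in zip(direct, out)))
```

Running every multiplier at 1/M of the input rate is precisely what makes the structure attractive for the low-power hardware comparison the paper carries out.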

Spreadsheet-based neural networks modelling and simulation for training and predicting inverse kinematics of robot arm
This paper proposes to solve the inverse kinematics (IK) problem of a two-degree-of-freedom planar robot arm using neural networks (NN). Several NN models with distinct numbers of hidden neurons, based on the sum-of-squares error function of the joint angle, are developed and trained with the generalised reduced gradient algorithm. The paper also demonstrates the modelling process of a feed-forward NN topology in a spreadsheet environment. Spreadsheet functions such as INDEX, SUMPRODUCT, EXP and SUMSQ; utilities such as the name manager, data validation, data tables, ActiveX controls, answer reports and charts; and the Solver add-in are used to develop the models. With link lengths and end-effector position and orientation as input parameters, two models with the structures 5-12-1 and 5-10-1 are found to be the most capable of predicting the first and second joint angles respectively. This NN-based IK technique contributes significantly to the optimal motion control of robot arms for quality processing and assembly tasks.
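Two pieces of the approach can be sketched compactly: the closed-form two-link IK that supplies training targets, and the feed-forward pass that the spreadsheet realises with SUMPRODUCT and EXP. The link lengths, target point and network weights below are made up, not the paper's trained 5-12-1 model.

```python
import math

# Analytic inverse kinematics of a two-link planar arm (assumed geometry).
l1, l2 = 1.0, 0.8          # link lengths (illustrative)
x, y = 1.2, 0.9            # target end-effector position (illustrative)

c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
theta2 = math.acos(max(-1.0, min(1.0, c2)))          # elbow angle
theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                       l1 + l2 * math.cos(theta2))

# Forward check: the joint angles must reproduce the target point.
fx = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
fy = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
print(round(fx, 6), round(fy, 6))

# One feed-forward pass of a tiny sigmoid network, mirroring what a
# spreadsheet computes cell-by-cell with SUMPRODUCT and EXP.
def forward(inp, w_hid, w_out):
    hidden = [1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, inp))))
              for w in w_hid]
    return sum(wo * h for wo, h in zip(w_out, hidden))

w_hid = [[0.5, -0.2, 0.1], [-0.3, 0.4, 0.2]]   # hypothetical weights
w_out = [0.7, -0.4]
val = forward([l1, x, y], w_hid, w_out)
print(round(val, 4))
```

Training then amounts to minimising the sum-of-squares error between `forward(...)` and the analytic joint angles over a grid of targets, which is the role the GRG-based Solver plays in the spreadsheet.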

Implementation of biologically motivated optimisation approach for tumour categorisation
Tumour prediction and classification is a complex task that needs attention, and medical experts often lack the specialised expertise it requires, so an intelligent clinical system model is the need of the hour. Recently, biologically motivated techniques have emerged as efficient computing methods for solving imprecise and complex problems. Nature, being highly robust and dynamic, forms an immense source of inspiration for finding solutions to sophisticated problems in the IT sector; the result obtained is a highly optimised and balanced solution, which is the basic idea of such nature-motivated techniques. In our research, we have analysed and implemented some important bio-inspired optimisation techniques to categorise different kinds of tumour, using a multilayer perceptron as the classifier. We then evaluated our results with critical metrics such as RMSE, the kappa coefficient and accuracy, among others, to determine the effectiveness of the system model developed. It is observed that using a bio-inspired computation approach enhances the efficiency of tumour classification. The results are presented in this paper.
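The evaluation metrics named above are easy to compute from a classifier's outputs; the labels and predicted probabilities below are made up purely to show the formulas.

```python
import math

# Illustrative binary classification results (not the study's data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
p_pred = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6, 0.95, 0.05]  # P(class 1)

n = len(y_true)
acc = sum(t == p for t, p in zip(y_true, y_pred)) / n

# Cohen's kappa: observed agreement corrected for chance agreement.
p1t = sum(y_true) / n
p1p = sum(y_pred) / n
pe = p1t * p1p + (1 - p1t) * (1 - p1p)
kappa = (acc - pe) / (1 - pe)

# RMSE of the predicted class-1 probabilities against the 0/1 labels.
rmse = math.sqrt(sum((t - q) ** 2 for t, q in zip(y_true, p_pred)) / n)

print(acc, round(kappa, 3), round(rmse, 3))
```

Kappa matters here because tumour datasets are often imbalanced, and raw accuracy alone overstates a classifier that favours the majority class.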

Mathematical modelling for fatigue life prediction of a symmetrical 65Si7 leaf spring
A leaf spring is a suspension component designed to sustain a required fatigue life before failure or permanent set. The fatigue life of a leaf spring depends on various factors such as geometry, design, material, processing, fatigue strength reduction and some uncontrollable factors. Determining the effect of varying an individual factor on fatigue life is always challenging, as the experimental procedure is time-consuming and costly, and no previous attempt has been made to predict this effect. The work presented in this paper depicts the effect of varying individual factors on the fatigue life of a leaf spring. A computer program for determining the fatigue life of a light commercial vehicle leaf spring was written in FORTRAN and validated experimentally. Two processing factors, five strength reduction factors, one design factor, one material factor and two geometry factors were considered in this investigation. The effect of varying one factor at a time on the fatigue life of the leaf spring was determined and modelled using the statistical tool NCSS. The regression model is presented and has been validated analytically and experimentally.
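A stress-life estimate of the kind such a program might compute can be sketched with the Basquin relation, with strength-reduction factors shrinking the effective fatigue strength. All parameter values below are illustrative assumptions, not the 65Si7 data from the paper.

```python
# Hedged Basquin-equation sketch (all values assumed, not the paper's):
sigma_f = 1400.0     # fatigue strength coefficient, MPa
b = -0.09            # fatigue strength exponent
stress_amp = 450.0   # applied stress amplitude, MPa

# Strength-reduction factors (surface finish, size, reliability, ...)
# multiply together to shrink the effective strength.
factors = {"surface": 0.85, "size": 0.9, "reliability": 0.897}
k_total = 1.0
for k in factors.values():
    k_total *= k

# Basquin: sigma_a = k * sigma_f' * (2N)^b, solved for the life N in cycles.
n_cycles = 0.5 * (stress_amp / (k_total * sigma_f)) ** (1 / b)
print(f"{n_cycles:,.0f} cycles")
```

Varying one factor at a time in such a calculation and regressing life against the varied factor is the one-factor-at-a-time study the abstract describes.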

Synthesising of reactionless flexible mechanisms for space applications
The dynamic balancing condition is often achieved by resorting to counter-devices; however, doing so adds extra weight and increases the inertia of the whole system, which is not cost-effective when the system is launched and later used in space. In this study, it is suggested that the reactionless condition can be achieved by combining self-balanced systems: for example, the dynamic balancing condition can be realised via a reconfiguration concept. No extra counter-mass is employed; instead, the whole structure is reconfigured, so the system does not become heavy, which reduces energy costs and makes the system more applicable and flexible for space applications. Based on this concept, one first balances a single component through the reconfiguration approach (the decomposition process) and then integrates the balanced components to build the entire system (the integration process). Finally, alongside the mechanical reconfiguration, the control laws governing the operation of the mechanism also need to be changed, so as to make the whole system more flexible when used in space.

Saturn ice ring exploration network mission platform
SIREN is a proposed mission concept that would demonstrate the use of small spacecraft at the rings of Saturn to quantify the ring environment and study the composition and dynamics of the ring particles. Many existing small spacecraft technologies are leveraged to make this mission possible. SIREN consists of several daughtercraft deployed from and networked to a mothership in hover-orbit over the ring plane. The mothership is based on the Saturn Ring Observer spacecraft (Nicholson et al., 2010). This paper details the mission objectives, the science requirements, the overall mission design and the science payloads of the daughtercraft.

Experimental investigation of optimal positional relation between RF antenna and magnetic cusp for thrust performance of RF plasma thruster
The electrodeless radio-frequency (RF) plasma thruster, which avoids the risk of electrode failure, is likely to be a highly appealing electric propulsion system. Although this type of thruster has achieved notable thrust performance in high-power conditions of several hundred kilowatts, it falls short in low-power conditions of several kilowatts. To improve the thrust performance at low power, an RF plasma thruster has been proposed that involves a magnetic cusp. This study aimed to reveal the optimal positional relation between the RF antenna and the magnetic cusp. The performance of an RF plasma thruster with a magnetic cusp was characterised experimentally using a torsion-pendulum thrust stand for six positional relations between the RF antenna and the magnetic cusp. The maximum thrust performance (4.4 mN, 443 s at 1,000 W and 1.2 mg/s of Ar) was obtained with the RF antenna located downstream of the magnetic cusp. The proposed optimised positional relationship of the thruster components is with the RF antenna located downstream of the magnetic cusp and close to the thruster exit.

Solar array degradation on geostationary communications satellites: the quantification of annual degradation and degradation over solar proton events
Solar array telemetry from eleven geostationary communication satellites, launched between 1990 and 1998, was analysed and used to quantify the solar array degradation of gallium arsenide (GaAs) and silicon (Si) solar cells. GaAs cells had an average annual degradation ranging between 0.44% and 1.03%, while Si cells had an average annual degradation ranging between 0.71% and 1.69%. The decrease in Si cell degradation rates, based on telemetry of Isc and Voc levels, of ~1,210% over a ten-year mission suggests an update to the Si solar cell degradation design rule of thumb of 25% over a ten-year mission. Degradation during solar particle events (SPEs) with 10 MeV proton flux > 10,000 pfu was analysed and used to create an initial functional relationship between the degradation experienced over SPEs and the accumulated fluence of these SPEs.

Design of a variable ISP space engine
An existing propulsion system is modified to obtain variable Isp capabilities that can be used to modulate thrust. This creates an engine able to use a low-thrust, high-Isp setting for interplanetary travel and a high-thrust, low-Isp setting for attitude control and compensating for atmospheric drag. A cold gas thruster (CGT) was created by using the body of a charge exchange thruster (CXT), effectively producing a dual-mode CXT with CGT capabilities. The dual-mode CXT was capable of a maximum thrust of 94 µN in electric engine mode. Experiments showed that, using argon, the dual-mode CXT in CGT mode produced a thrust of 4 mN, a 43-fold improvement. A convergent-divergent nozzle was designed for use with the dual-mode CXT and validated via ANSYS Fluent; it showed a 16-fold improvement in CGT performance at low mass flow rates (< 50 sccm).