Subscribe: Inderscience
http://www.inderscience.com/rss/rss.php
Language: English

Inderscience



This New Articles Channel contains the latest articles published in Inderscience's distinguished academic, scientific and professional journals.



 



A cache coherence scheme for developing mobile cooperative applications
The advance of mobile computing and network technology allows the integration of mobile devices into cooperative applications. The available techniques for developing cooperative applications were designed mainly for strongly coupled environments where disconnection is not an issue, yet developing synchronous cooperative applications is challenging because disconnection can disrupt the collaborative work and frustrate users' expectations. Cache coherence schemes exist for maintaining data coherence and provide reasonable performance for wireless applications on mobile devices; however, those schemes do not consider the processing and communication requirements of synchronous cooperative systems. In this paper we propose a cache coherence scheme for the development of mobile synchronous cooperative work applications that is based on periodic update notifications and provides awareness information. The proposed scheme was evaluated through controlled experiments based on a case study involving a cooperative game application. The results indicate that the scheme ensures cache coherence and concurrency control and provides awareness information for mobile cooperative work.
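A rough illustration of the kind of mechanism described in this abstract (not the authors' actual protocol): a local cache entry is refreshed only when a periodic update notification carries a newer version. The class and method names (CacheEntry, apply_notification) are hypothetical.

    # Minimal sketch of periodic-notification cache coherence (illustrative only).
    import time

    class CacheEntry:
        def __init__(self, value, version):
            self.value = value
            self.version = version
            self.updated_at = time.time()

    class MobileCache:
        def __init__(self):
            self.entries = {}          # key -> CacheEntry

        def apply_notification(self, key, value, version):
            """Apply a periodic update notification from the server."""
            entry = self.entries.get(key)
            if entry is None or version > entry.version:
                self.entries[key] = CacheEntry(value, version)
                return True            # replica refreshed -> raise an awareness event
            return False               # notification is stale, keep the local copy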



Fast self-repairing region growing surface reconstruction algorithm for unorganised point cloud data
This paper proposes a fast self-repairing projection-based surface reconstruction algorithm for both closed-form and free-form data sets that follows the region-growing principle, generating triangles between a reference point and its neighbours according to their status and positions on the tangent plane. The overall meshing framework and related concepts are outlined first; the triangulation procedure is then summarised into seven cases according to the status and position of the points, and the corresponding triangulation details are presented. To eliminate the triangles potentially missed by the triangulation procedure, a data structure called the Single Edge Index Table (SEIT) is developed to track all boundaries of the generated triangular mesh and is dynamically updated as triangles are formed. After triangulation, a quick depth traversal of the SEIT detects all local holes in the reconstructed mesh, followed by a hole-filling procedure that removes holes below a predefined size. Experiments validate that the proposed algorithm reconstructs better meshes for both closed-form and free-form point clouds and achieves high efficiency.



Important approach to 3D reconstruction of tridimensional objects based on multiple plan images
In the present paper, we focus on a new approach for efficient and reliable tridimensional reconstruction of objects from flat images. Our approach performs tridimensional reconstruction without passing through the calibration and self-calibration phases of the camera; instead, it is based on estimating the fundamental matrix and the homography at infinity to obtain the projective, affine and Euclidean projections. The method relies first on a very important step in 3D reconstruction, the detection of interest points with the Harris detector, to obtain a sufficient number of matches distributed over the images; these matches are used to estimate the 3D points. Second, the projection matrices are estimated from the relationships between the three types of tridimensional reconstruction (projective, affine and Euclidean). Experimental results show that this method is practical and gives satisfying results without going through the calibration step.
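For readers unfamiliar with this calibration-free pipeline, the sketch below estimates a fundamental matrix from matched interest points with OpenCV. The arrays pts1 and pts2 are random placeholders standing in for real Harris-based correspondences; they are not part of the paper.

    # Sketch: fundamental matrix from matched points (placeholder correspondences).
    import numpy as np
    import cv2

    pts1 = (np.random.rand(50, 2) * 640).astype(np.float32)      # placeholder matches
    pts2 = pts1 + np.random.randn(50, 2).astype(np.float32)      # placeholder matches

    # RANSAC-based estimation; the mask flags the inlier correspondences.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    if F is not None and mask is not None:
        inliers1 = pts1[mask.ravel() == 1]
        inliers2 = pts2[mask.ravel() == 1]
        print(F)  # 3x3 fundamental matrix used to build the projective reconstruction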



Multimodal information perception based active human-computer interaction
Human-computer interaction (HCI) has great potential for applications in many fields. An HCI method is proposed based on audio-visual perception and feedback. The interactive target can be determined through different interaction modes, including gazing, hand pointing, eye-hand coordination, speech, and the fusion of these audio-visual modalities, in a non-contact, non-wearable way. Audio-visual feedback is provided to the user in a timely manner for an immersive experience. A mode-analysis-based fusion strategy is proposed to fuse the interactive responses from the audio-visual modalities, especially when the modalities deviate from one another at the same time. The developed approach performs well even with a single modality and multiple interactive users in the scene. In addition, the diversity of interaction habits among multiple users in a crowded scene is handled on ordinary hardware. Comparative experiments show that the proposed approach achieves superior HCI performance.



Neural networks approach for IR-heating and deformation of ABS in thermoforming
This study focuses on the interaction between an IR-heating source and the material to be thermoformed, with the aim of providing an accurate description of the polymer behaviour under the conjugated effect of stress and temperature. The possibility of modelling material behaviour and developing a reliable and simple system to define the thermoforming strategy is of great interest for improving industrial production and reducing manufacturing costs. In this investigation, both tensile tests and temperature measurements were performed on ABS subjected to IR radiation. Different values of the polymer-lamp distance, sample thickness and test rate were considered. The experimental trends were modelled by an artificial neural network. The proposed neural network solution showed good generalisation capability and high flexibility, in accordance with the experimental results.



Systematic review of aspect-oriented formal method
Aspect-oriented modelling is used in a growing number of business and software development projects. However, the plethora of unstandardised aspect-oriented approaches is preventing companies from adapting aspect-oriented software development methods into a formal method. Both aspect-orientation and formal methods have evolved separately and little has been done to investigate the benefits of the possible integration of these two fields. Therefore, in this paper we aim to present a survey of the approaches used to date in the application of the aspect-oriented paradigm to formal methods of specification. We also systematically review some of the techniques and approaches that combine both concepts together and discuss the problems and issues that arise in relation to the combination of formal methods and aspect orientation. In addition, we identify the open research questions to highlight potential areas of future research.



An integrated fuzzy MCDM approach for risk evaluation of new product in a pipe industry
The New Product Development (NPD) process is recognised as an important source of competitive advantage for most companies, but it involves high risk and uncertainty. Hence, this study aims at accelerating new product introduction and improving the quality of decision-making in the NPD process under risky and uncertain conditions. This paper suggests an integrated framework based on the Fuzzy Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) to evaluate new products in a fuzzy environment, where vagueness and subjectivity are handled with linguistic values parameterised by triangular fuzzy numbers. Fuzzy AHP is used to calculate the weights of the risk criteria, and the Fuzzy TOPSIS method is applied to obtain a final ranking. Finally, the proposed approach is applied to NPD evaluation in a pipe and fitting company. The results reveal that supply risks have the highest effect on the risk evaluation of new products.
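As a hedged illustration of how triangular fuzzy numbers feed into a TOPSIS-style closeness coefficient (not the paper's actual criteria, weights or data), consider the following sketch; the ratings and the vertex distance are invented placeholders.

    # Sketch: TOPSIS closeness coefficient over triangular fuzzy numbers (l, m, u).
    import numpy as np

    def fuzzy_distance(a, b):
        """Vertex distance between two triangular fuzzy numbers."""
        return np.sqrt(((np.array(a) - np.array(b)) ** 2).mean())

    # Placeholder fuzzy ratings of two new products against two risk criteria.
    ratings = {
        "product_A": [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)],
        "product_B": [(0.1, 0.3, 0.5), (0.7, 0.9, 1.0)],
    }
    ideal, anti_ideal = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)

    for name, row in ratings.items():
        d_pos = sum(fuzzy_distance(r, ideal) for r in row)
        d_neg = sum(fuzzy_distance(r, anti_ideal) for r in row)
        cc = d_neg / (d_pos + d_neg)      # closeness coefficient -> final ranking
        print(name, round(cc, 3))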



Who has the development process in his hands?
In literature, many contributions discuss the importance of 'Process Owners' in business processes. This issue is highly relevant to product development, this often being the most strategically important, but also the most highly dispersed and knowledge-intensive process found in companies. Despite the relevance attributed to this role, few authors discuss the problem of identifying to whom the responsibility of Process Ownership should be given. The paper provides an approach, inspired by multiple-issue actor analysis methodologies, which supports a rigorous identification of the Process Owner by evaluating candidate actors' consistency with process goals, their influence on the process, and their negotiating power. Then the paper presents the application of this approach to the product development process in an Italian aerospace company.



A novel AHP-TOPSIS integrated method for case-based retrieval in mechanical product design
Case-based Design (CBD) is an effective method in product configuration design, for which successful retrieval of the most similar case is critical. However, conventional retrieval strategies fail to consider the interrelations between design requirements. As a result, they often arrive at a case that is not the most desirable or derive multiple cases with similarity levels so close that case matching becomes difficult. Based on the fundamental concepts of the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), this research proposes an improved retrieval strategy combining AHP and TOPSIS for mechanical product design. This method ensures the validity of the retrieval result and solves the problem of multiple similar cases. The paper further elaborates on the product configuration model, the matching algorithms and the retrieval process. Finally, a case study of wheeled agricultural tractor design is used to validate the method.



Contextualised co-creation: innovating with individual external contributors throughout the product life cycle
A co-creation strategy for product innovation ought to be customised for the specific context in which it is to be implemented. Despite the obvious common-sense appeal of this idea, the context-dependent character of co-creation has not yet been clearly recognised and analysed in the pertinent literature. We address this intellectual gap by positing the concept of 'contextualised co-creation'. By focusing on the evolving opportunities for product innovation, and related extant conditions and potential risks, we see the early and the latter stages of the product life cycle as distinct contexts for co-creation. For each respective context our concept suggests what type of actors may be involved as co-creators in product innovation projects, which type of co-creation may be appropriate, and how a suitable co-creative setting may be developed. Our contextualised co-creation concept may be applied by companies as a rubric for strategic decision-making related to collaborative innovation with individual external contributors in product development projects.



Design and FPGA implementation of chaotic interleaver for IDMA system
The interleaver plays a very important role in a digital communication system. It is often used to improve the performance of forward error correcting codes. Interleavers can also be used in multiuser transmission, especially in the Interleave Division Multiple Access (IDMA) technique. Indeed, in IDMA systems users are distinguished only by their interleavers: each user is identified by a specific interleaver. We propose a method to generate chaotic interleavers from sequences with chaotic behaviour produced by chaotic maps such as the logistic function and the Henon map. In this paper, we study the performance of chaotic interleavers for IDMA systems through correlation analysis. A comparison between the proposed interleavers and other interleavers in terms of FPGA (Field Programmable Gate Array) resources and maximum operating frequency is presented. Simulation results show that the designed chaotic interleaver is simple to generate and outperforms other interleavers, requiring the fewest devices and exhibiting the best timing behaviour.
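A minimal sketch of the general idea (a user-specific permutation derived from a logistic-map sequence); the map parameter r = 3.99 and the seed x0 are arbitrary and not taken from the paper.

    # Sketch: user-specific interleaver from a logistic-map chaotic sequence.
    def chaotic_interleaver(length, x0, r=3.99):
        x, seq = x0, []
        for _ in range(length):
            x = r * x * (1 - x)          # logistic map iteration
            seq.append(x)
        # Sorting the chaotic sequence yields a permutation of 0..length-1.
        return sorted(range(length), key=lambda i: seq[i])

    perm = chaotic_interleaver(16, x0=0.37)
    data = list(range(16))
    interleaved = [data[i] for i in perm]    # each user gets a distinct seed x0
    print(interleaved)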



An incorporated constrained differential evolution algorithm and its application on parameter identification of machine joint surfaces
In this paper, an incorporated constrained differential evolution algorithm based on Invasive Weed Optimisation (IWCDE) is proposed. The IWCDE algorithm builds on the standard differential evolution algorithm and incorporates the Invasive Weed Optimisation algorithm, introducing a new mutation strategy and an 'infeasible individual evolution' strategy. The proposed algorithm was compared with several other evolutionary algorithms; the results show that it overcomes premature convergence efficiently and has better global convergence and robustness. Finally, the paper establishes an optimisation model for parameter identification of machine joint surfaces and solves it with the IWCDE algorithm.
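For orientation, the sketch below shows one generation of the classic DE/rand/1 scheme with a simple penalty for constraint violations; it is not the IWCDE operators (the invasive weed dispersal and the paper's infeasible-individual strategy are omitted), and the objective and constraint are toy examples.

    # Sketch: one DE/rand/1 generation with a penalty for constraint violations.
    import numpy as np

    def sphere(x):                       # toy objective
        return float(np.sum(x ** 2))

    def violation(x):                    # toy constraint: sum(x) >= 1
        return max(0.0, 1.0 - float(np.sum(x)))

    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(20, 3))
    F, CR = 0.5, 0.9

    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = a + F * (b - c)                         # DE/rand/1 mutation
        cross = rng.random(3) < CR
        trial = np.where(cross, mutant, pop[i])          # binomial crossover
        # Penalised comparison: infeasible individuals are discouraged, not discarded.
        if sphere(trial) + 1e3 * violation(trial) < sphere(pop[i]) + 1e3 * violation(pop[i]):
            pop[i] = trial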



A hybrid algorithm for mining local outliers in categorical data
Outlier detection is an important task in data mining, and many approaches have been developed to detect outliers. However, most research focuses on global outlier detection, while in many situations local outlier detection is more valuable. In this paper, existing methods for outlier detection are first discussed, and then the definition of a local outlier and the related formulas are given. A hybrid algorithm for mining local outliers is proposed, based on a clustering algorithm and the standard deviation from statistics. By calculating the standard deviation of a cluster and the local outlier factor of each object in the cluster, clusters with a higher standard deviation can be identified as likely to contain outliers, and objects with a higher local outlier factor can be recognised as outliers. Experimental results on real datasets show that the proposed algorithm is correct and effective for mining local outliers.
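A hedged sketch of the general cluster-then-deviate idea: points far from their cluster centre relative to the cluster's spread are flagged. It uses k-means on numerical data for simplicity (the paper targets categorical data), and the threshold and "factor" here are illustrative, not the paper's formulas.

    # Sketch: cluster objects, then flag points far from their cluster centre
    # relative to the cluster's standard deviation (illustrative thresholds).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2)), [[4.0, 4.0]]])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    for c in range(2):
        members = X[km.labels_ == c]
        centre = members.mean(axis=0)
        dists = np.linalg.norm(members - centre, axis=1)
        sigma = dists.std()
        factors = dists / (sigma + 1e-12)   # "local" deviation relative to cluster spread
        print("cluster", c, "outliers:", members[factors > 3.0])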



Exploring factors affecting the adoption of users' adoption intention: an integration of information intervention and cognition of internet logistic information platform
This study develops and empirically tests a user adoption model for an internet logistics information platform. From the perspective of information intervention, a questionnaire survey of logistics enterprises is used to systematically study how different types of information intervention influence user adoption intention towards the platform. In particular, the study systematically defines and measures four independent information intervention variables (news propaganda, platform training, platform open source, and information input) and three intermediate variables (strategic value perception, operational value perception and implementation cost perception). The results indicate that the impacts of the different types of information intervention on logistics enterprises' adoption intention differ significantly. Furthermore, information input has a much stronger impact than platform training and platform open source, while news propaganda has the smallest impact.



Optimisation and fabrication by 3D printing of a ceiling antenna for communication applications
This paper presents a new ceiling antenna optimised by differential evolution (DE) and fabricated by 3D printing. The frequency band of the antenna ranges from 0.75 GHz to 3 GHz. The geometric structure is based on the volcano smoke antenna, with a teardrop as the radiating element and a curved ground plane; the curves in the structure are determined by a cubic spline interpolation function. The antenna design problem is converted into a constrained optimisation problem (COP), which is then solved by DE. Several optimised antennas were found, and one of them is presented in this paper. The measured result essentially meets the antenna requirements, which means the proposed ceiling antenna can theoretically serve the 2G, 3G and 4G LTE bands as well as the WiFi frequency bands. For such a complex-shaped three-dimensional antenna, 3D printing was chosen for fabrication, and the measured result closely matches the simulation.



Heterogeneous multi-subswarm particle swarm optimisation for numerical and parameter estimation of PMSM
A heterogeneous multi-subswarm coevolution particle swarm optimisation (HMSCPSO) is proposed for numerical optimisation and parameter identification of permanent magnet synchronous motors (PMSM). To improve the algorithm's dynamic optimisation performance, HMSCPSO consists of one adaptive subswarm and several basic subswarms. During the iterations, the best individuals in the basic and adaptive subswarms are selected as candidates to construct an elite subswarm. A heterogeneous search strategy is adopted in the basic and adaptive subswarms, and a migration scheme is employed for information exchange between subswarms. An adaptive inertia weight strategy maintains a balance between exploration and exploitation to ensure that the algorithm converges to a stable point. To accelerate the convergence rate, an immune clonal selection operator with wavelet mutation is applied to the elite subswarm. The performance of the proposed algorithm is extensively evaluated on a suite of numerical optimisation functions; the results demonstrate the good performance of HMSCPSO compared with other recent PSO variants. The performance of HMSCPSO is further evaluated through its application to the parameter identification of a PMSM. The experimental results show that HMSCPSO can simultaneously and accurately identify the stator resistance, the dq-axis inductances and the permanent magnet flux.
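To make the inertia-weight remark concrete, the sketch below shows a canonical single-swarm PSO update with a linearly decreasing inertia weight on a toy objective; the subswarm migration, elite subswarm and immune clonal operators of HMSCPSO are not reproduced here, and the coefficients are generic defaults rather than the paper's settings.

    # Sketch: canonical PSO update with a linearly decreasing inertia weight.
    import numpy as np

    rng = np.random.default_rng(2)
    n, dim, iters = 30, 5, 100
    pos = rng.uniform(-10, 10, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = np.sum(pos ** 2, axis=1)          # toy objective: sphere function
    gbest = pbest[pbest_val.argmin()].copy()

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # inertia: explore early, exploit late
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.sum(pos ** 2, axis=1)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best value:", pbest_val.min())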



Handover management with call admission control in integrated femtocell-macrocell network
To achieve high data rates, better indoor coverage and high capacity, a low-power, low-cost femtocell network is a very good option. For a successful deployment, smooth integration of the femtocell network into the macrocell network and seamless communication between the two networks are very important. Conventional handoff algorithms used in macrocells need some modification to handle handover management in an integrated macrocell-femtocell network. In this paper, we propose a new hybrid handover approach with a call admission control policy that ensures seamless communication between integrated femtocell-macrocell networks, makes effective use of femtocells and avoids unnecessary handovers.



PAPR reduction in OFDM system using hybrid PTS-RCMN scheme
Orthogonal Frequency Division Multiplexing (OFDM) is a widely accepted multi-carrier transmission scheme used for high data rate applications such as Long-Term Evolution (LTE). It offers a higher data transmission rate through efficient bandwidth usage and provides strong resistance and immunity to inter-symbol interference (ISI) and inter-carrier interference (ICI). However, the major drawback of OFDM is its high Peak to Average Power Ratio (PAPR), which forces the high power amplifier (HPA) to work in its non-linear region and results in harmonic distortion at the output. Partial transmit sequence (PTS) is a widely employed and tested method for PAPR reduction in 4G systems. In this paper we propose a novel scheme that combines PTS with the Reduced Complexity Maximum Norm (RCMN) method to improve the PAPR reduction performance of the PTS system. The scheme effectively reduces PAPR compared with the conventional PTS technique, with only an insignificant increase in computational complexity.
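For readers new to PTS, the sketch below computes the PAPR of an OFDM symbol, PAPR = max|x|² / E[|x|²], and performs a brute-force search over phase rotations of interleaved subblocks; the RCMN part of the paper's hybrid scheme is not reproduced, and the subblock partition and phase set are illustrative.

    # Sketch: PAPR of an OFDM symbol and a brute-force PTS phase search.
    import numpy as np
    from itertools import product

    def papr_db(x):
        return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

    rng = np.random.default_rng(3)
    N, V = 64, 4                                    # subcarriers, PTS subblocks
    X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)   # QPSK symbols

    subblocks = np.zeros((V, N), dtype=complex)
    for v in range(V):
        subblocks[v, v::V] = X[v::V]                # interleaved subblock partition
    time_blocks = np.fft.ifft(subblocks, axis=1)

    best = None
    for phases in product([1, -1, 1j, -1j], repeat=V):   # candidate rotation factors
        x = np.sum(np.array(phases)[:, None] * time_blocks, axis=0)
        p = papr_db(x)
        if best is None or p < best[0]:
            best = (p, phases)

    print("original PAPR (dB):", round(papr_db(np.fft.ifft(X)), 2))
    print("PTS PAPR (dB):", round(best[0], 2))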



Adapting radio resources in multicarrier cognitive radio using discrete firefly approach
User resource allocation has attracted research attention in the context of the Cognitive Radio (CR) paradigm. Aiming to fully exploit the frequency bands unused by primary users, CR enables secondary users to tune their transmission parameters and communicate within these bands with a good Quality of Service (QoS). This paper targets the adaptation of radio resources in multicarrier transmission according to the priority and needs of the active users, the channel state and frequency availability. The adaptation of such resources has been investigated previously, and Particle Swarm Optimisation (PSO) and Cross Entropy (CE) approaches were shown to outperform their counterparts in terms of convergence rate and solution quality. Motivated by the promise of the recently proposed firefly approach, we adapt it as a multi-objective approach to optimise the communication quality of secondary users in a multicarrier system. The superiority of the proposed approach over the PSO and CE techniques is assessed in terms of convergence speed, solution quality and stability.



Research on visual background extractor to identify the vehicles based on edge similarity
By combining the visual background extractor (ViBe) with the Canny operator, a new approach to identifying vehicles in a complex traffic environment is proposed. First, the foreground object is extracted by the ViBe algorithm and a background difference algorithm, and 'ghosts' are removed by means of edge similarity. Next, complete moving objects are obtained by applying morphological processing to the foreground, and vehicles are detected by combining this with motion analysis. Experiments show that the whole region of a vehicle in a complex traffic environment can be extracted exactly and effectively with this method. In addition, problems due to ghosts and background variation are handled well with low computational complexity, which satisfies the needs of real-time operation.



Medical images segmentation based on improved three-dimensional pulse coupled neural network
The pulse coupled neural network (PCNN) is a third-generation artificial neural network model built on principles of the cat visual system. It has unique advantages for image processing and is widely used in many fields, especially image segmentation and image fusion. However, the traditional PCNN model has several problems, such as its many parameters and the difficulty of setting them; moreover, its exponential decay mechanism can sometimes make image segmentation difficult. To solve these problems, a simplified and improved 3D-PCNN model is proposed in this paper, through which whole 3D brain image segmentation is achieved. The experimental results show that the 3D-PCNN algorithm reduces segmentation time and improves segmentation efficiency compared with the traditional 2D-PCNN model, the traditional 3D-PCNN algorithm and the 3D Otsu algorithm.



Managing workflows on top of a cloud computing orchestrator for using heterogeneous environments on e-Science
Scientific workflows (SWFs) are widely used to model processes in e-Science. SWFs are executed by means of workflow management systems (WMSs), which orchestrate the workload on top of computing infrastructures. The advent of cloud computing has opened the door to using on-demand infrastructures to complement or even replace local infrastructures. However, new issues have arisen, such as the integration of hybrid resources and the compromise between infrastructure reutilisation and elasticity. In this article, we present an ad hoc solution for managing workflows that exploits the capabilities of cloud orchestrators to deploy resources on demand according to the workload and to combine heterogeneous cloud providers (such as on-premise clouds and public clouds) with traditional infrastructures (clusters) to minimise cost and response time. The work does not propose yet another WMS but demonstrates the benefits of integrating cloud orchestration when running complex workflows. The article reports several configuration experiments with a realistic comparative genomics workflow, called Orthosearch, that migrate memory-intensive workload to public infrastructures while keeping other blocks of the experiment running locally, and it evaluates running time and cost to suggest best practices.



Clustering-based uncertain QoS prediction of web services via collaborative filtering
Although collaborative filtering (CF) has been widely applied for QoS-aware web service recommendation, most approaches focus on deterministic QoS prediction and fail to take into account the inherent QoS uncertainty of web services in service-oriented web applications. To address this problem, this paper proposes a novel approach for uncertain QoS prediction via collaborative filtering and service clustering. We first establish an uncertain QoS model for each service user, where each service is formalised as a QoS matrix. To mine similar neighbourhood users for an active user, we then extend the Euclidean distance to calculate the similarity between two uncertain QoS models. Finally, we present two QoS prediction strategies based on collaborative filtering and clustering, called U-Rec and UC-Rec. Extensive experiments have been carried out on 1.5 million real-world uncertain QoS transaction logs of web services, and the results validate the effectiveness of the proposed approach.
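As a hedged sketch of what "extending the Euclidean distance to uncertain QoS models" could look like (the paper's exact formulation is not reproduced), each service observed by a user is summarised by the mean and standard deviation of its QoS matrix before distances are compared; the matrices and the similarity mapping below are invented.

    # Sketch: similarity between two users' uncertain QoS models, where each
    # observed service is a small matrix of (response time, throughput) samples.
    import numpy as np

    def service_distance(A, B):
        """Euclidean distance between per-service QoS summaries (mean and std)."""
        summary_a = np.concatenate([A.mean(axis=0), A.std(axis=0)])
        summary_b = np.concatenate([B.mean(axis=0), B.std(axis=0)])
        return np.linalg.norm(summary_a - summary_b)

    def user_similarity(user_a, user_b):
        common = set(user_a) & set(user_b)
        if not common:
            return 0.0
        d = np.mean([service_distance(user_a[s], user_b[s]) for s in common])
        return 1.0 / (1.0 + d)           # map distance into (0, 1]

    user1 = {"s1": np.array([[0.3, 20.0], [0.4, 18.0]])}
    user2 = {"s1": np.array([[0.35, 19.0], [0.5, 15.0]])}
    print(user_similarity(user1, user2))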



Skyline service selection approach based on QoS prediction
The internet currently hosts a large number of web services with highly volatile quality of service (QoS), which makes it difficult for users to quickly access highly reliable online services. Hence, selecting the optimal service composition based on fast and reliable QoS has emerged as a challenging and popular problem in the field of service computing. In this paper, we propose a service selection approach based on QoS prediction. We treat historical QoS information as a time series and predict QoS values using the autoregressive integrated moving average model, which provides more accurate QoS attribute values. We then quantify the uncertainty of the prediction results using an improved coefficient of variation. To downsize the search space, we employ Skyline computing to prune redundant services and perform Skyline service selection using 0-1 mixed-integer programming. Experimental results on a real-world dataset show that our approach yields satisfactory performance in terms of reliability and efficiency.
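The two pruning ideas mentioned above are easy to illustrate in isolation; the sketch below computes a plain coefficient of variation (not the paper's improved variant) and a Skyline over two QoS dimensions, assuming lower response time and higher reliability are better, with invented candidate values.

    # Sketch: coefficient of variation on predicted QoS, then Skyline pruning.
    import numpy as np

    def coeff_variation(series):
        series = np.asarray(series, dtype=float)
        return series.std() / (abs(series.mean()) + 1e-12)

    def dominates(a, b):
        """a dominates b: no worse in both dimensions and strictly better in one
        (response time: lower is better; reliability: higher is better)."""
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

    # candidate -> (predicted response time, predicted reliability)
    candidates = {"s1": (0.2, 0.99), "s2": (0.5, 0.98), "s3": (0.2, 0.95)}
    skyline = [s for s in candidates
               if not any(dominates(candidates[t], candidates[s])
                          for t in candidates if t != s)]
    print("uncertainty of s1 history:", round(coeff_variation([0.21, 0.19, 0.22]), 3))
    print("skyline services:", skyline)   # s2 and s3 are dominated by s1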



An overall approach to achieve load balancing for Hadoop Distributed File System
Hadoop Distributed File System (HDFS) is a popular cloud storage system that can scale up easily to meet the increasing demand for storage capacity. In HDFS, files are divided into fixed-size blocks, which are then replicated and randomly stored on many DataNodes to prevent data loss. The random nature of the default block placement strategy may lead to load imbalance among the DataNodes. Although HDFS has a built-in utility to achieve load balancing, it comes at the cost of reduced system performance owing to moving blocks around. In this paper, we take a holistic approach to load balancing by considering all situations that may influence the load-balancing state. We design a new role named BalanceNode to help match heavy-loaded and light-loaded DataNodes, so that light-loaded nodes can take over part of the load from heavy-loaded ones. We also design a better block placement strategy to make the storage load as balanced as possible in the first place. The simulation results show that our approach achieves a better load-balancing state than existing algorithms.
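A toy sketch of the matching idea attributed to the BalanceNode role: nodes well above the mean utilisation are paired with nodes well below it. The threshold, the greedy pairing and the node names are illustrative, not the paper's algorithm.

    # Sketch: pair heavy-loaded DataNodes with light-loaded ones around the mean.
    def match_nodes(utilisation, threshold=0.10):
        """utilisation: dict of node -> fraction of capacity used."""
        mean = sum(utilisation.values()) / len(utilisation)
        heavy = sorted((n for n, u in utilisation.items() if u > mean + threshold),
                       key=utilisation.get, reverse=True)
        light = sorted((n for n, u in utilisation.items() if u < mean - threshold),
                       key=utilisation.get)
        # Greedily pair the most loaded node with the least loaded one.
        return list(zip(heavy, light))

    nodes = {"dn1": 0.92, "dn2": 0.35, "dn3": 0.55, "dn4": 0.20}
    print(match_nodes(nodes))    # [('dn1', 'dn4')] -> candidate block moves dn1 -> dn4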



Knowledge management in an innovative virtual company
Through a longitudinal case study, we examine the structure, characteristics, and team dynamics that motivate innovation within a virtual company. By determining the extent to which the knowledge management and organisational learning literatures apply to a virtual company, Flare Solutions Limited, we identify unique characteristics that motivate creativity among the company's partners. Using a case study and an action research approach, we found that this company integrates an interesting mix of mechanistic and organic elements within its structure to operate successfully as a learning organisation in a virtual environment. In addition, the company has concretely identified and inventoried its knowledge and other aspects of intellectual capital, permitting a clearer understanding of its operations. Our work contributes to understanding how virtual companies can generate an atmosphere of creativity and innovation to create, preserve, and disseminate knowledge.



Total quality management and job satisfaction among the bank employees
Total quality management (TQM) has been one of the most popular practices among management practitioners for the last two decades, and most corporations and firms nowadays consider TQM a source of competitive advantage. This paper investigates the relationship between TQM elements and job satisfaction. The study is quantitative in nature: a questionnaire was developed from previous studies and used in this research, and regression analysis was adopted to test the hypotheses. The results reveal that three independent variables, namely teamwork, organisational culture, and reward and recognition, have positive and significant relationships with job satisfaction, whereas no significant relationship between organisational trust and job satisfaction was evidenced. The findings may contribute significantly to the development of new knowledge and help in understanding how TQM elements work in the banking sector of Bangladesh.



Determinants of intellectual capital disclosure - Indian companies
The objective of the study is to examine the association, if any, between the intellectual capital disclosure made by Indian corporates and determinant factors such as leverage, ownership structure, independence of the board, sector and firm size. The firms studied are Indian companies included in the Nifty 50, the main index of the National Stock Exchange (NSE) of India. The study finds that service sector companies have a higher level of disclosure than manufacturing or industrial sector companies, and that company size is positively associated with disclosure level. The data further reveal that when the heterogeneity of firms is taken into account along with the time factor, companies with a higher proportion of independent directors disclose more, while government-owned firms disclose less than other firms.



Introduction of new intellectual capital disclosure framework in Indonesia
Intellectual capital has gained increasing attention, and its importance has been acknowledged widely, including in Indonesia. Despite this importance, several researchers emphasise the limited number of such studies in non-developed countries. Furthermore, the majority of prior studies on intellectual capital disclosure employ content analysis despite its weaknesses. Therefore, this research constructs an Indonesian intellectual capital disclosure framework, based on Indonesian data, that may overcome the weaknesses of content analysis. This research also offers a new joint theoretical framework, the intellectual capital theoretical framework, which may be useful for future intellectual capital disclosure research. The research contributes to the limited intellectual capital disclosure research in Indonesia, provides valuable insights into practice in Indonesia, and contributes to the development and introduction of the intellectual capital theoretical framework and the Indonesian intellectual capital disclosure framework.



Evaluating intellectual capital and its impact on financial performance: empirical evidence from Indian electricity, mining and asset financing service sectors
This paper empirically examines the impact of intellectual capital (IC) efficiency on the financial performance of the Indian electricity, mining and asset financing service sectors. Pulic's value-added intellectual coefficient (VAIC™) model is used to evaluate the value-added efficiency of 60 companies selected from the Bombay Stock Exchange on the basis of market capitalisation over the period 2006 to 2015. The correlation analysis suggests that IC has a positive relationship with profitability and an inverse relationship with productivity. Consequently, IC has a partially positive impact on financial performance and may become a path for improvement in the future. The main evidence reveals that human capital has the strongest positive effect on firm value. The empirical evidence takes a concrete step towards a deeper understanding of the role of IC in improving the performance ability of firms and helps organisations create and maintain a competitive advantage in emerging economies.
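For reference, Pulic's VAIC™ is commonly computed from value added (VA), human capital (HC), structural capital (SC = VA - HC) and capital employed (CE) as VAIC = VA/HC + SC/VA + VA/CE. The figures in the sketch below are invented, not taken from the paper's sample.

    # Sketch: Pulic's VAIC(TM) components from invented financial figures.
    def vaic(value_added, human_capital, capital_employed):
        hce = value_added / human_capital                  # human capital efficiency
        sce = (value_added - human_capital) / value_added  # structural capital efficiency
        cee = value_added / capital_employed               # capital employed efficiency
        return hce + sce + cee

    print(round(vaic(value_added=500.0, human_capital=200.0, capital_employed=1000.0), 3))
    # 2.5 + 0.6 + 0.5 = 3.6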



The moderating role of corporate governance on the relationship between intellectual capital efficiency and firm's performance: evidence from Saudi Arabia
This study examines the moderating role of corporate governance in the relationship between intellectual capital efficiency and financial, operational and market performance. The study uses pooled data from 171 firms listed on the Saudi Stock Exchange during the period from 2012 to 2014, and a multiple regression approach with fixed effects is employed. The findings reveal that including corporate governance as a moderating variable positively influences the relationship between the intellectual capital components and financial, operational and market performance. In addition, only capital employed efficiency positively affects financial performance, while structural capital efficiency and capital employed efficiency positively affect operational performance; market performance is positively affected by all the intellectual capital components. Further, the findings show that larger firms exhibit a higher level of human capital efficiency, while smaller firms exhibit higher levels of structural capital and capital employed efficiency.



Identification of protein complexes in protein-protein interaction networks by core-attachment approach incorporating gene expression profile
Thanks to advances in proteomic technologies, bulk protein-protein interaction (PPI) data are available, giving bioinformatics researchers the opportunity to explore and understand biological properties and structure from a network perspective. Identification of protein complexes is a challenge that has attracted researchers, particularly in computational biology, and various computational approaches have been developed to identify protein complexes in PPI networks. In this paper, we present a new method based on the core-attachment approach that incorporates gene expression data, called core-attachment with gene (CAG) expression, to identify protein complexes in PPI networks. Experimental results show that CAG can detect protein complexes effectively. Validation with biological information, namely co-localisation and gene ontology semantic similarity scores, reveals that the complexes predicted by our method have high biological relevance. We also compare our method with four other popular methods in the field.



In-silico analysis of marker genes from gene expression data of solanaceous plants responsible for various abiotic stresses
Understanding the responses of plants to any environmental condition requires expression analysis of transcriptome data. The present work focuses on identifying groups of Solanum tuberosum genes that are differentially expressed under different abiotic stresses. Public databases were searched for gene expression data in response to cold, heat and salt stress, respectively. The common genes responding to all three abiotic conditions were considered marker genes and analysed further; their gene ontology classification and their visualisation in metabolic pathways were also examined. The genes encoding the kunitz-type protease inhibitor precursor were found to be up-regulated, whereas the genes encoding lipid transfer protein were down-regulated. These marker genes may be studied for further validation to establish their role in the stress responses of medicinally important plants of the Solanaceae family.



A novel approach to knowledge discovery and representation in biological databases
Extraction of motifs from biological sequences is among the frontier research issues in bioinformatics, with sequential patterns mining becoming one of the most important computational techniques in this area. A number of applications motivate the search for more structured patterns and concurrent protein motif mining is considered here. This paper builds on the concept of structural relation patterns and applies the concurrent sequential patterns (ConSP) mining approach to biological databases. Specifically, an original method is presented using support vectors as the data structure for the extraction of novel patterns in protein sequences. Data modelling is pursued to represent the more interesting concurrent patterns visually. Experiments with real-world protein datasets from the UniProt and NCBI databases highlight the applicability of the ConSP methodology in protein data mining and modelling. The results show the potential for knowledge discovery in the field of protein structure identification. A pilot experiment extends the methodology to DNA sequences to indicate a future direction.



Identification of potential biomarkers in nasopharyngeal carcinoma based on protein interaction analysis
Nasopharyngeal carcinoma (NPC) is a malignant tumour strongly related to Epstein-Barr virus infection. Several diagnostic methods are available, but they only indicate the viral titre. This research aims to identify new potential biomarkers and their contributions to NPC signalling pathways. Biomarkers were identified by topological analysis, modularity analysis and functional analysis using Cytoscape 3.2.1. Furthermore, the expression of the candidate biomarkers was confirmed with microarray data from NCBI and analysed by a non-paired t-test. The results showed four potential biomarkers with the highest values in each parameter of the topological analysis, namely RPA1, USP7, UBC and TERF2, but only RPA1 was included in the protein module with the highest score of 4.526, while UBC and TERF2 were involved in protein modules with lower scores. Moreover, RPA1 showed high expression in NPC samples (p < 0.05; FC = 1.07) and is mainly related to the cell cycle pathway. This study may help in understanding the NPC mechanism and developing an appropriate treatment.



Computational approach to reveal the modulation of Wnt and TGF-β signalling induced by Solenopsin B, an ant venom alkaloid
Recently, the therapeutic prospects of solenopsin A were reported, with emphasis on the inhibition of angiogenesis by antagonising Akt and inhibiting insulin-mediated PI3K activation. In this study, we attempt to computationally predict the genes and pathways altered by solenopsin B using microarray data. Functional analysis of the differentially expressed genes, i.e., gene ontology and pathway enrichment analysis using bioinformatics tools, specifically indicated gene-level variations leading to down-regulation of the Wnt, ErbB and TGF-β signalling pathways.



Strain-based approach for fatigue crack propagation simulation of the 6061-T651 aluminium alloy
Fatigue crack growth models based on elastic-plastic stress-strain histories at the crack tip vicinity and on strain-life damage models have been proposed. The UniGrow model is a particular case of this class of fatigue crack propagation models. The residual stresses developed at the crack tip play a central role in these models, since they are used to assess the actual fatigue crack driving force, taking into account mean stress and load sequence effects. The performance of the UniGrow model is assessed using available experimental constant-amplitude crack propagation data for the 6061-T651 aluminium alloy. Key issues in fatigue crack growth prediction with the UniGrow model, in particular the evolution of residual stresses, are discussed. Using available strain-life data, it was possible to model the fatigue crack propagation behaviour of AA6061-T651, taking the stress R-ratio effects into account, and a satisfactory agreement was found between the predictions and the experimental crack propagation data.



Security awareness and the use of location-based services, technologies and games
Rapid expansion and development in the modern mobile technology market has created an opportunity for the use of location-based technologies and games. Because of this fast expanding market and new technology, it is important to be aware of the implications this expansive technology could have on computer security. This paper will endeavour to measure the impact of location-based technologies and games on the security awareness of first- to fourth-year computer science university students. A questionnaire, posted on the web, and completed by computer science students from different year groups, was used to collect the data for this study. The major results of this study are the following: there is a difference in the security awareness of students who use and play location-based services, technologies and games and those who do not. This study also determined that the computer science students are cautious of security implications although they do not take preventative measures.



An investigation into the forensic implications of the Windows 10 operating system: recoverable artefacts and significant changes from Windows 8.1
With the release of Microsoft's latest operating system, Windows 10, forensic investigators must examine it in order to determine the changes implemented from Windows 8.1 and the addition of new artefacts. This study is an analysis of Windows 10 and its new features in order to distinguish these artefacts. The tools used include: VMware Fusion, FTK Imager, Process Monitor, Process Explorer, ESEDatabase View and Registry Explorer. The paper also determines if artefacts have changed in Windows 10 in comparison to the previous version of Windows, Windows 8.1. When comparing the two it was found that many of the pre-existing artefacts found within Windows 8.1 are still present in Windows 10. Slight differences are noted in the way prefetch files are compressed and also the thumbnail databases. Significant artefacts related to the new features in Windows 10 are also reported.



An evidence collection and analysis of Windows registry
Cyber crimes are committed internally or externally: malware and remote access are the means of committing cyber crimes externally, whereas a trusted insider in an organisation causes industrial espionage internally. On a Windows system, the registry is a source of evidence against the cyber criminal, as it maintains details of activity on the system, and its forensic investigation helps in collecting evidence relevant to a case. The registry maintains a very large amount of system- and user-related information, so to gather potential evidence about a user's malicious activities the forensic investigator would otherwise need to search the entire registry, wasting time and effort. This raises the need for an evidence collection and analysis methodology to identify, extract and analyse the evidence specifically related to user activity on the system. After considering the existing research, this paper suggests a framework with an improved evidence collection and analysis methodology to aid the digital forensic investigation of the registry in identifying a potential malicious insider.



Embedding digital watermark in one-dimensional signals using wavelet and Schur decomposition
An efficient, robust and secure audio watermarking algorithm that can hide a large number of watermark bits without perceptually affecting the quality of the audio signal is presented in this paper. The proposed algorithm uses Schur decomposition of wavelet coefficients to achieve an optimal balance between the conflicting design parameters of audio watermarking. Schur decomposition makes the proposed method significantly robust against challenging signal processing attacks, and the discrete wavelet transform provides a good opportunity to accommodate a very high watermarking payload without affecting perceptual quality; the choice of these two domains complements each other in addressing the contradictory design requirements of watermarking. Experimental results indicate that the algorithm is highly perceptually transparent and has excellent subjective audible quality at an embedding capacity of 480 bps. The algorithm shows very good robustness to challenging synchronisation attacks such as compression and to various signal processing attacks at very high payload without affecting the audible quality of the signal. The computation time of the proposed algorithm is also found to be very low, making it suitable for real-time applications.
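A heavily hedged sketch of the general wavelet-plus-Schur idea, not the authors' embedding rule: one bit is embedded by quantising a diagonal entry of the Schur factor of a block of detail coefficients. The block size, quantisation step and coefficient choice are illustrative placeholders.

    # Sketch: embed one watermark bit into a block of DWT coefficients via Schur.
    import numpy as np
    import pywt
    from scipy.linalg import schur

    rng = np.random.default_rng(4)
    audio = rng.standard_normal(1024)            # placeholder audio frame

    cA, cD = pywt.dwt(audio, 'db4')              # one-level discrete wavelet transform
    block = cD[:16].reshape(4, 4)                # small block of detail coefficients
    T, Z = schur(block)                          # block = Z @ T @ Z.T

    bit, step = 1, 0.5
    d = T[0, 0]
    # Quantisation-index-modulation style embedding on one diagonal entry.
    T[0, 0] = (np.floor(d / step) + (0.75 if bit else 0.25)) * step

    cD[:16] = (Z @ T @ Z.T).ravel()              # rebuild the detail band
    watermarked = pywt.idwt(cA, cD, 'db4')       # inverse DWT -> watermarked signal
    print(watermarked.shape)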



Encryption scheme classification: a deep learning approach
Encryption has an important role in protecting cyber assets. However, the use of weak encryption algorithms is a vulnerability that may be exploited, and detecting this vulnerability from encrypted data is a very difficult task. This research explores the use of recent advances in machine learning, specifically deep learning algorithms, to classify encryption schemes based on entropy measurements of encrypted data with no feature engineering. Past research using various machine learning algorithms has failed to achieve good classification accuracy. The research entails applying popular encryption algorithms with block cipher modes to the CIFAR10 image dataset. Two ImageNet-winning convolutional neural network models were used to perform the classification, and transfer learning and layer modification were applied to evaluate classification effectiveness. The research concludes that deep learning algorithms can perform such classification where other algorithms have failed.



Comments on 'An improved authentication scheme for mobile satellite communication systems'
Recently, Lee et al. (2012) proposed an authentication scheme for satellite communication systems. Zhang et al. (2015) then found that this scheme is vulnerable to smart card loss, denial of service and replay attacks, proposed an improved authentication scheme for satellite communication systems, and claimed resistance against these attacks. Nevertheless, in this paper, we show that the Zhang et al. (2015) scheme is as insecure as the original protocol against the denial of service attack. An improved version is then proposed to avoid this security flaw. Finally, the security, reliability and performance analysis of the improved protocol is given; it demonstrates that the improved version meets the security requirements and has lower computation costs, making it more suitable for mobile satellite communication systems.



k-hop neighbour knowledge-based clustering in CRN under opportunistic channel access
Cognitive radio networks (CRNs) enable cognitive users (CUs) to use the available spectrum of primary users (PUs) through innovative techniques. When PUs appear, CUs at different positions may have different available channels, which change dynamically over time. Owing to these temporal and spatial variations in opportunistic channel availability, ensuring connectivity and robustness in CRNs is of great research interest. We propose a novel k-hop neighbour knowledge-based clustering algorithm that guarantees robustness in CRNs and converges in O(n²m) for n CUs and m clusters in the network. We evaluate varying 3-hop, 4-hop, ..., k-hop configurations through simulation, which shows that our scheme achieves 25%-30% more outward common channels and outperforms competitive approaches in terms of the inner common channel index, the outward common channel index, the number of isolated nodes, throughput, and the frequency of route discovery.
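To illustrate what k-hop neighbour knowledge means here (a sketch only, not the paper's clustering algorithm), the fragment below gathers all CUs within k hops and intersects their channel sets on a tiny invented topology.

    # Sketch: k-hop neighbour knowledge and common channels in a toy CRN topology.
    import networkx as nx

    G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5)])
    channels = {1: {"a", "b"}, 2: {"a", "c"}, 3: {"a", "b"}, 4: {"b"}, 5: {"a", "b"}}

    def k_hop_common_channels(G, node, k):
        """Channels shared by a CU with every neighbour within k hops."""
        hops = nx.single_source_shortest_path_length(G, node, cutoff=k)
        neighbours = [n for n, d in hops.items() if 0 < d <= k]
        common = set(channels[node])
        for n in neighbours:
            common &= channels[n]
        return neighbours, common

    print(k_hop_common_channels(G, 1, k=2))   # ([2, 3], {'a'})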



Load balanced routing scheme for MANETs with power and delay optimisation
Innovative research on mobile ad hoc networks (MANETs) and their wide range of applications has revolutionised today's wireless communication. Efficient routing over a dynamically changing network and an unstable wireless medium is a challenging task. The conventional AODV routing protocol selects the hop-count-based shortest path when delivering packets towards the destination, which often creates uneven load distribution among network nodes: central nodes carry heavier loads, which significantly depletes their energy and causes higher packet loss and processing delay. In this work, a power and delay optimised AODV (PDO-AODV) protocol is presented, based on a load balanced routing strategy in AODV that improves the quality of service for MANETs. The proposed approach takes the neighbouring nodes' power and delay into consideration when selecting a load balanced path. Simulation results show that the proposed PDO-AODV achieves improved performance compared with the existing AODV routing protocol in terms of packet drop, network delay, throughput, and retransmission attempts.



An effective resource management in cloud computing
Resources must be provisioned so that they are made available to users' requests efficiently and satisfy their needs, and resource allocation in a virtualised environment should provide elasticity. When the workload increases, existing approaches cannot respond to the growing performance demand in an effective way, which may lead to violation of the service level agreement (SLA) and decrease the quality of service (QoS). Existing methods cannot make accurate decisions on allocating resources in an optimal way, are not predictive in nature, and cannot take precautions in resource management before problems occur. Therefore, a framework is used to ensure effective resource management. This framework uses a rough set algorithm to make accurate decisions on the allocation of resources, and variations in workload are adapted to by considering new parameters such as the type of application, the garbage processing policy and internal application resources.



Compiling, verifying and simulating dynamic software architectures using ANTLR and coloured-ADL
The concept of reconfigurable and dynamic software architecture (DSA) occupies an important place in the field of software engineering today. As a result, several architecture description languages (ADLs) and approaches have been proposed for describing DSA at the highest level of abstraction. However, most of these works present theoretical solutions without giving an idea of how the final systems execute at run time. In this paper, we propose a new DSA called coloured software architecture (CSA), based on two concepts: coloured operations and coloured connectors. We then propose a new ADL called coloured-ADL and implement a compiling, verification and simulation tool dedicated to CSA. The simulation of system instances derived from a CSA is mainly used to explain coloured-ADL and evaluate the reliability of the simulated system. The verification, on the other hand, focuses on checking a newly defined safety property called architectural stack overflow (ASO); a safe CSA should be free of ASO violations. To check a CSA, the verification also uses finite state processes (FSP) and labelled transition systems (LTS) to detect this kind of property. We illustrate our propositions through two case studies from the literature.



A fuzzy-based mechanism for delay window adjustment to improve TCP performance in wireless multi-hop networks
The TCP protocol was initially designed for wired networks to guarantee reliable data transfer. In wired networks, TCP assumes congestion if data packets are dropped. However, this assumption does not hold when the end-to-end path includes wireless links, where a high bit error rate (BER), the shared medium and the dynamic nature of the wireless channel may cause packet loss. This results in severe degradation of standard TCP performance in wireless networks. On the other hand, TCP cannot be changed fundamentally because of its large installed base in wired networks. Delayed ACK generation at the TCP receiver has been proposed to mitigate the collision rate within wireless networks; however, choosing an appropriate delay window size is an important issue. In this article, we propose a new delayed-ACK approach based on a fuzzy controller to improve TCP performance in wireless networks. The major advantage of our mechanism is that it reduces the overhead and the collision rate in multi-hop wireless networks. We also compare our proposal with some well-known studies, and simulation results confirm the good performance of our mechanism.



Route recommendation system to support multiple destinations and multiple routes to minimise road congestion
Multiple-destination routing is important for car navigation when a user requests a service rather than a specific destination. Existing navigation systems can suggest the k shortest routes to a destination, where the cost is defined in terms of distance, travel time, or another parameter. In the case of a special event such as a festival, or an unforeseen situation such as a tsunami, a large number of users require the same service, e.g., a parking lot or a safe shelter, rather than a specific destination. When the service points are known, the proposed algorithm first computes multiple near-optimal routes to those destinations, and then optimally distributes the traffic along those routes based on their respective costs. This reduces congestion both on the road network and at the service points. The proposed algorithm has been simulated with real-life traffic data on a real city road network, showing encouraging results compared with conventional routing.
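A toy sketch of the two ingredients described above: k candidate routes to a service point, and a traffic split across them. The tiny network and the inverse-cost proportional rule are illustrative assumptions, not the paper's optimal distribution scheme.

    # Sketch: k candidate routes to a service point and an inverse-cost traffic split.
    import itertools
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([("A", "B", 2), ("B", "D", 2), ("A", "C", 3),
                               ("C", "D", 2), ("B", "C", 1)])

    k = 3
    routes = list(itertools.islice(
        nx.shortest_simple_paths(G, "A", "D", weight="weight"), k))
    costs = [nx.path_weight(G, r, weight="weight") for r in routes]

    # Distribute the demand in inverse proportion to route cost.
    inv = [1.0 / c for c in costs]
    shares = [round(x / sum(inv), 2) for x in inv]
    for route, cost, share in zip(routes, costs, shares):
        print(route, "cost", cost, "share", share)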



An enhanced anonymous remote user authentication scheme using smart card in insecure communication channel
To prove legitimacy among users and to ensure secure communication over an insecure network, remote user authentication using a smart card and password is one of the simplest and most efficient mechanisms. In this context, Kumari et al. proposed an improved remote user authentication scheme and claimed that it is more user friendly and can resist various possible attacks at very low cost compared with existing schemes. Unfortunately, during our research we found that this is not the case: their scheme cannot withstand all the attacks it was meant to resist. In this paper, we point out that their scheme not only suffers from a user anonymity problem but also fails to resist offline password guessing and server masquerading attacks, and puts the session key agreement at risk. Then, while retaining the original merits of their scheme, we propose an efficient, modified scheme that overcomes the aforesaid weaknesses at low computational cost.