ORIGINAL_ARTICLE
Application of design of experiments for calibration of a flight simulator
The application of a flight simulator with six degrees of freedom (6DOF) in the design and analysis of rocket systems is essential. The calibration of a 6DOF flight simulator is a challenging problem. This paper focuses on the calibration of a flight simulator and intends to answer the following questions:
1) Does the experimental design have a significant impact on calibration quality? If so, which experimental design is suitable for calibrating a flight simulator?
2) How should the appropriate number of runs for calibration be determined?
3) Can DOE be used to estimate the calibration parameters? If so, how should it be applied?
The experiments were conducted using a 6DOF simulator. Our investigation shows that the experimental design has a significant impact on the quality of calibration, and that the triangular design is the most suitable for calibrating the 6DOF simulator. A new method for determining the number of runs is proposed based on the entropy function; it is also used to estimate the calibration parameters. One advantage of the proposed method is that it calibrates the 6DOF simulator with respect to several criteria, such as the range error, the azimuth error, and a combination of these errors. The quality of the calibration has been verified by hypothesis testing.
https://jieng.ut.ac.ir/article_21664_bf30233a387f086e6297289592eb51bb.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Ali
Babapour Atashgah
babapour@mailinator.com
true
1
AUTHOR
Abas
Seifi
aseifi@aut.ac.ir
true
2
AUTHOR
Morteza
Behbahaninejad
behbahaninejad@mailinator.com
true
3
AUTHOR
ORIGINAL_ARTICLE
Developing project scheduling model with makespan and robustness objectives
The research on project scheduling has expanded widely over the last few decades. The vast majority of these research efforts focus on exact and suboptimal procedures for constructing a workable schedule, assuming complete information and a static, deterministic problem environment. Project activities are scheduled subject to both technological precedence constraints and resource constraints, mostly with the objective of minimizing the project duration. The resulting schedule serves as the baseline for the execution of the project. During execution, however, the project is subject to considerable uncertainty, which may lead to numerous schedule disruptions. It is surprising that, given the large number of projects that have finished late during the last decades, management still fails to quote accurate project due dates. This is problematic because virtually all organizations use their project plans not only as tools with which to manage the projects, but also as a foundation for making delivery commitments to clients. Therefore, a vitally important purpose of project plans and effective scheduling methods is to enable organizations to estimate the completion dates of the corresponding projects. A project schedule also serves as a baseline for material procurement, contracts with subcontractors, coordination of internal resources, and corrective actions. The ultimate goal, which is more akin to deterministic scheduling, is to make scheduling and resource-allocation decisions that allow quoting a due date that is as early as possible. Such optimization decisions need to be taken not only before the project actually starts; scheduling and resource allocation should also be possible during project execution in order to protect the promised dates from any sources of uncertainty that may arise.
Additionally, if advance knowledge about the nature of the uncertainty in the project is available, it is desirable that the project schedule be robust to the disruptions that may arise. One important approach to this issue is constructing a robust project schedule: the baseline schedule is built with uncertainty in mind, so that variations disrupt it as little as possible. In this article, we introduce the concept of schedule robustness and then develop a bi-objective resource-constrained project scheduling model that maximizes robustness along with minimizing the makespan. We then present a tabu search algorithm that operates on surrogate robustness functions and is designed to generate an approximate set of efficient solutions. Many random project scheduling problems are generated and solved by the algorithm, and the efficiency of the algorithm and of the robustness surrogate functions is evaluated by simulation. The results confirm the efficiency of the algorithm and of the developed surrogate function.
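Slack-based measures are common surrogates for schedule robustness in this literature. As a minimal sketch (the article's exact surrogate functions are not given in this abstract, and the activity network and schedule below are hypothetical), the total free slack of a baseline schedule can be computed as:

```python
# Sketch: a common slack-based robustness surrogate (total free slack).
# All activity data here are hypothetical illustrations.

def total_free_slack(durations, successors, start, deadline):
    """Sum of free slack over activities: for each activity, the gap
    between its finish time and the earliest start of its successors
    (or the project deadline for terminal activities)."""
    slack_sum = 0
    for act, dur in durations.items():
        finish = start[act] + dur
        succ = successors.get(act, [])
        if succ:
            gap = min(start[s] for s in succ) - finish
        else:
            gap = deadline - finish
        slack_sum += max(gap, 0)
    return slack_sum

durations = {"A": 3, "B": 2, "C": 4}
successors = {"A": ["B"], "B": ["C"]}
start = {"A": 0, "B": 4, "C": 7}   # one feasible baseline schedule
print(total_free_slack(durations, successors, start, deadline=12))
# gaps: A->B is 1, B->C is 1, C->deadline is 1, so the total is 3
```

A schedule with a larger total free slack can absorb more activity-duration disruptions before successors are delayed, which is why such sums serve as cheap stand-ins for simulated robustness inside a tabu search.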
https://jieng.ut.ac.ir/article_21665_91fb38de620421174569f8aba3c1fad2.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Iman
Bossaghzade
bossaghzade@mailinator.com
true
1
AUTHOR
Seyed Reza
Hejazi
rehejazi@ce.iut.ac.ir
true
2
AUTHOR
Amir Moosa
Ehsan
ehsan@mailinator.com
true
3
AUTHOR
ORIGINAL_ARTICLE
A Lagrangean relaxation-genetic algorithm heuristic for the multi-product, multi-stage, multi-period lot sizing problem with limited resource capacity
In this paper, a hybrid method is proposed for limited resource allocation and leveling in complex multi-stage, multi-product, multi-period production planning problems, with the aim of determining lot sizes and minimizing total cost. The problem consists of multiple products with sequential production processes that are produced in different periods to meet the customers' demand. Given the decision variables, the production capacity of the machines, and the customers' demand, an integer linear program is developed to minimize the total set-up, inventory holding, and production cost. A three-stage approach has been developed to solve the problem. In the first stage, the primary problem is divided into several sub-problems using a heuristic algorithm based on the Lagrangean multipliers of the limited resources; each sub-problem can then be solved with simpler methods. In the second stage, a new approach is proposed to solve these sub-problems, combining a genetic algorithm with a neighborhood search technique. In the third stage, resource leveling is performed among the sub-problems to obtain a better solution. In this way, the lot size for each product is determined over the planning periods. The paper's objectives have been evaluated and verified through several empirical experiments.
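The first stage relies on Lagrangean multipliers for the relaxed capacity constraints. A minimal sketch of the standard subgradient multiplier update (the paper's exact decomposition is not detailed in this abstract, and all data below are hypothetical):

```python
# Sketch of a subgradient update for Lagrange multipliers on per-period
# capacity constraints (usage <= capacity); all numbers are hypothetical.

def update_multipliers(lmbda, usage, capacity, step):
    """One subgradient step: raise a multiplier when its capacity
    constraint is violated, lower it (not below zero) when there is slack."""
    return [max(0.0, l + step * (u - c))
            for l, u, c in zip(lmbda, usage, capacity)]

lmbda = [0.0, 0.5]        # current multipliers, one per period
usage = [12.0, 8.0]       # capacity used by the current relaxed solution
capacity = [10.0, 10.0]   # available capacity per period
print(update_multipliers(lmbda, usage, capacity, step=0.1))
# period 1 is violated (+2), so its multiplier rises; period 2 has slack
# (-2), so its multiplier falls toward zero
```

Iterating this update while re-solving the relaxed sub-problems is the usual way the multipliers steer the decomposition toward feasibility.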
https://jieng.ut.ac.ir/article_21666_12cead46886943d51e7026edc02111a8.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Hybrid Genetic Algorithm
Integer linear programming
Lagrangean multipliers
Neighborhood search techniques
Production Planning
Resource allocation and leveling
Hasan
Khadmi Zare
khadmi@ut.ac.ir
true
1
AUTHOR
S.M.T.
Fatemi Ghomi
fatemi@aut.ac.ir
true
2
AUTHOR
Behrooz
Karimi
b.karimi@aut.ac.ir
true
3
AUTHOR
Masoud
Jenabi
jenabi@mailinator.com
true
4
AUTHOR
Abbas
Raad
raad@mailinator.com
true
5
AUTHOR
ORIGINAL_ARTICLE
Gold price forecasting using hybrid artificial neural networks with fuzzy regression model
Artificial Neural Networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, they have a data limitation and need a large amount of historical data in order to yield accurate results; therefore, their performance in incomplete-data situations is not satisfactory. Although no definite rule exists for the sample size required for a given problem, the amount of data needed for network training depends on the network structure, the training method, and the complexity of the particular problem or the amount of noise in the data at hand.
However, collecting the necessary data is very expensive and time-consuming. Given the rapid changes in real situations, especially in financial and economic systems, forecasting in these environments therefore needs methods that remain efficient with less available data. Since fuzzy forecasting models use fuzzy numbers instead of crisp values, they require fewer observations and are suitable under incomplete-data conditions. However, their performance is not always satisfactory, especially when the training data set includes significant differences or outlying cases.
Using hybrid models, or combining several models, has become a common practice for improving forecasting accuracy, and the literature on this topic has expanded dramatically. In this paper, fuzzy regression models are applied to construct a new hybrid ANN model that is more accurate than traditional neural networks, especially in cases where only inadequate historical data are available. In the proposed model, fuzzy numbers are used as the parameter values of the artificial neural network (weights and biases) instead of crisp values. To show the appropriateness and effectiveness of the proposed model for time series forecasting, it has been applied to the gold price forecasting problem and its performance has been compared with that of its components. Empirical results indicate that the proposed model is an effective way to improve forecasting accuracy; therefore, it can be applied as an appropriate alternative for forecasting tasks, especially when higher forecasting accuracy is needed.
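To illustrate the idea of fuzzy network parameters, here is a minimal sketch of a single linear neuron whose weights and bias are symmetric triangular fuzzy numbers, as in fuzzy regression. The paper's actual architecture and training procedure are not reproduced here, and all numbers are hypothetical:

```python
# Sketch: a linear neuron with symmetric triangular fuzzy weights
# (center c, spread s). For nonnegative inputs, the fuzzy output's
# center and spread follow directly from fuzzy arithmetic.

def fuzzy_neuron(x, weights, bias):
    """x: nonnegative inputs; weights/bias: (center, spread) pairs.
    Returns the (center, spread) of the fuzzy output."""
    center = bias[0] + sum(c * xi for (c, _), xi in zip(weights, x))
    spread = bias[1] + sum(s * xi for (_, s), xi in zip(weights, x))
    return center, spread

weights = [(0.8, 0.1), (1.5, 0.2)]   # hypothetical fuzzy weights
bias = (0.5, 0.05)                   # hypothetical fuzzy bias
print(fuzzy_neuron([2.0, 1.0], weights, bias))
# center = 0.5 + 0.8*2 + 1.5*1 = 3.6; spread = 0.05 + 0.1*2 + 0.2*1 = 0.45
```

The spread carries the model's uncertainty through the forecast, which is what lets such hybrids work with fewer observations than a crisp-weight network.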
https://jieng.ut.ac.ir/article_21667_6cb465f7793dcdb72183b9d06a39f749.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Artificial neural networks (ANNs)
fuzzy regression
Hybrid models
Time series forecasting
Mehdi
Khashei
khashei@in.iut.ac.ir
true
1
AUTHOR
Mehdi
Bijari
bijari@mailinator.com
true
2
AUTHOR
ORIGINAL_ARTICLE
Statistical analysis of the relationship between key success factors of Six Sigma in Iranian companies
The Six Sigma methodology is one of the best-known and most effective problem-solving approaches that helps organizations achieve their goals. Despite its many advantages, most companies have failed to implement Six Sigma successfully. One reason for this is organizations' lack of attention to success or failure factors when implementing Six Sigma projects. Hence, while performing Six Sigma, it is necessary to focus on the Critical Success Factors (CSFs) of Six Sigma and to analyze the relationships among them.
Based on a comprehensive literature review, this study focuses on the importance of the CSFs and their roles in the successful deployment of Six Sigma within an organization, and proposes a framework to explain the relationships among these factors. Empirical data were collected through a survey of Iranian manufacturing companies that have experience in implementing Six Sigma projects; a total of 105 Six Sigma experts responded. Tests of the structural model (factor analysis, structural equation modeling) support the proposed hypotheses. The findings provide empirical support for the relationships among Six Sigma CSFs and offer a flow chart for Six Sigma adoption. Furthermore, this research refines our understanding of the relationships among the CSFs, and directions for further research are discussed.
https://jieng.ut.ac.ir/article_21668_6f2a2bd5a21834cdd2777b930c1c5e58.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Critical Success Factors
Six Sigma
Structural Equation Modeling
Shesam
Zegordi
zegordi@modares.ac.ir
true
1
AUTHOR
Samaneh
Bagheri
bagheri@mailinator.com
true
2
AUTHOR
Javad
Attarian
attarian@mailinator.com
true
3
AUTHOR
ORIGINAL_ARTICLE
A new DEA model for finding most efficient DMU with imprecise data
Data Envelopment Analysis (DEA) is a widely recognized approach for evaluating the efficiencies of decision making units (DMUs). Because of its easy and successful application and case studies, DEA has gained much attention and widespread use among business and academic researchers. The conventional DEA models (e.g., BCC and CCR) assume that input and output data are exact values on a ratio scale. However, in real cases it is not always feasible to define and measure some inputs and outputs exactly. Recently, researchers have addressed the problem of imprecise data in DEA in its general form. The term ‘‘imprecise data’’ reflects the situation where some of the input and output data are only known to lie within bounded intervals (interval numbers), while other data are known only up to an order. This paper proposes a new DEA model that allows the user to find the most efficient DMU in the presence of imprecise data (interval and ordinal data). As an advantage, the proposed model is efficient and finds the most efficient DMU by solving a single model while considering imprecise data. Moreover, the applicability of the proposed model is illustrated in a supplier selection problem, in which 18 suppliers with imprecise data were evaluated and the most efficient one was selected. Finally, the results of the proposed model were compared with those of a previously published model in the literature.
https://jieng.ut.ac.ir/article_21669_bb75778fe6a47685fb7aee1a63e5d79b.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Data Envelopment Analysis
imprecise data
Interval data
Ordinal data
Supplier Selection
Babak
Sohrabi
bsohrabi@ut.ac.ir
true
1
AUTHOR
Soroush
Nalchigar
nalchigar@mailinator.com
true
2
AUTHOR
ORIGINAL_ARTICLE
Using intelligent models to rank IT complementary assets for the new product development process
During the past decade in Iran, Information Technology (IT) has had a deep impact on the economy, and many companies have invested in IT and its complementary assets. Although a large and increasing share of firms' budgets is spent on IT, there is evidence that firms fail to obtain the benefits of these expenditures within the expected period. This fact has caused what is known as the productivity paradox. The ICT productivity paradox (the contradiction between research findings on IT's effects on organizational performance and what researchers expected) and the issue of complementary assets (investments that help ensure IT's impact on organizational performance), together with the growing role of ICT in new product development projects, confront organizations with a serious question: in a New Product Development (NPD) project, what investment priorities on ICT complementary assets maximize the performance level of the project? To answer this question, we present an extensive case study within the strategic research units and new product development departments of Iran Khodro, and the conceptual research model is analyzed using a neural network technique. The results for a specific project show that resource development (such as a feasibility study for each product and a financial assessment) and human resources play the most important roles in increasing project performance in three areas: financial, managerial, and general. Management should focus on leading and inspiring the project team members to achieve the project objectives, and, as far as possible, project members should be single-task. Training the team and building a culture of using advanced technology is another aspect of human resource activity as an IT complement.
In fact, by investing in these complementary factors and exploiting their benefits alongside IT, a company can expect an increase in performance.
https://jieng.ut.ac.ir/article_21670_a9b42c5ea4e3ee99ff39cdfb19863c5e.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Information and communication technology
Neural Network
New product development project
Productivity Paradox
Abbas
Keramati
keramati@ut.ac.ir
true
1
AUTHOR
Hasan
Haleh
haleh@mailinator.com
true
2
AUTHOR
Behdad
Banan
banan@mailinator.com
true
3
AUTHOR
Navid
Mojir
mojir@mailinator.com
true
4
AUTHOR
Ali
Derakhshani
derakhshani@mailinator.com
true
5
AUTHOR
ORIGINAL_ARTICLE
Comparison of estimators of standard deviation, mean moving range and median moving range to estimate process standard deviation using autocorrelated data
Independence of observations is a fundamental assumption in designing an X chart for individual observations. Unfortunately, this assumption is often not even approximately satisfied; in fact, some autocorrelation is present in the data. This paper demonstrates, through simulation of an AR(1) process, that autocorrelation in the data has a deep effect on the X chart. Moreover, estimators based on the sample standard deviation, the mean moving range, and the median moving range for estimating the process standard deviation are introduced, and the best among them is identified for use with autocorrelated data. Finally, two new estimators are presented for estimating the process standard deviation using autocorrelated data.
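The classical moving-range estimators can be illustrated on simulated AR(1) data. This sketch uses the standard unbiasing constants for subgroups of size two (1.128 for the mean moving range and 0.954 for the median moving range), which are valid only for independent data; the bias that appears under autocorrelation is the effect the article studies. The parameter values are illustrative, not taken from the paper:

```python
import random
import statistics

def ar1(phi, sigma_eps, n, seed=1):
    """Simulate an AR(1) process x[t] = phi*x[t-1] + eps[t]."""
    random.seed(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma_eps)
        out.append(x)
    return out

def sigma_mr_mean(data):
    """Mean moving range divided by d2 = 1.128 (valid for i.i.d. data)."""
    mr = [abs(b - a) for a, b in zip(data, data[1:])]
    return statistics.mean(mr) / 1.128

def sigma_mr_median(data):
    """Median moving range divided by 0.954 (valid for i.i.d. data)."""
    mr = [abs(b - a) for a, b in zip(data, data[1:])]
    return statistics.median(mr) / 0.954

data = ar1(phi=0.7, sigma_eps=1.0, n=5000)
# true process sd = sigma_eps / sqrt(1 - phi**2), about 1.40 here;
# with positive autocorrelation both estimators tend to fall well below it
print(sigma_mr_mean(data), sigma_mr_median(data))
```

With positive phi, successive observations move together, so moving ranges shrink and both estimators understate the true process standard deviation, which in turn tightens the X chart limits and inflates the false-alarm rate.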
https://jieng.ut.ac.ir/article_21671_78ca8ed6890b36e8b5a3596c3ac966a6.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
AR(1) Process
Autocorrelation
Complete minimal sufficient statistic
Independent data
Multivariate normal
Uniform minimum variance unbiased estimator (UMVUE)
X chart
Mehdi
Kalantari
kalantari@mailinator.com
true
1
AUTHOR
S.M.T.
Fatemi Ghomi
fatemi@aut.ac.ir
true
2
AUTHOR
ORIGINAL_ARTICLE
A model for developing the knowledge production function in knowledge production (case study) (Technical note)
The intangibility of knowledge and the unclear relationship between inputs and outputs have made knowledge production difficult to control. In this paper, we present a model that clarifies the relationships between inputs and outputs in knowledge production and develops knowledge production functions. To this end, we extract the resources of knowledge production and the product utility indices from the literature. To develop the production functions, we apply the Analytic Hierarchy Process (AHP) to find the weights of the production factors for each utility index and establish linear production functions from the AHP results. Then, using practical data and the gradient descent method, we improve the coefficients of the linear functions. A set of data is generated from the linear functions; a neural network is then trained on this data set together with data gathered from the case to find the relationship between the utility indices and the production factors, so the production functions are improved with real data. We applied this method in the Ghods newspaper office. A t-test verifies the performance of our method, and both the gradient descent and the neural network steps decrease the mean squared error.
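The coefficient-improvement step can be sketched as plain stochastic gradient descent on a linear production function f(x) = w · x; the data, initial weights, and learning rate below are hypothetical illustrations, not values from the Ghods case:

```python
# Sketch: refining the coefficients of a linear production function by
# stochastic gradient descent on squared error. All data are hypothetical.

def gradient_descent(xs, ys, w, lr=0.01, epochs=500):
    """For each sample, step the weights against the gradient of
    0.5 * (w.x - y)^2, i.e. w <- w - lr * (pred - y) * x."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# two production factors; targets generated by the "true" weights (2, 3)
xs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
ys = [2.0, 3.0, 5.0, 7.0]
w0 = [1.0, 1.0]                     # e.g. initial AHP-derived weights
print(gradient_descent(xs, ys, w0))  # converges toward [2.0, 3.0]
```

Starting from the AHP-derived weights and letting observed data pull the coefficients toward the true relationship mirrors the refinement step the abstract describes, before the neural network stage takes over for nonlinear effects.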
https://jieng.ut.ac.ir/article_21672_2e499035e67de170f33488465ab43dc2.pdf
2010-03-21T11:23:20
2020-05-26T11:23:20
Knowledge Management
knowledge production
Knowledge production function
Nahid
Hashemian Bojnord
hashemian@mailinator.com
true
1
AUTHOR
Mohammad Bagher
Menhaj
tmenhaj@ieee.org
true
2
AUTHOR