Recent zbMATH articles in MSC 68M20
https://www.zbmath.org/atom/cc/68M20
2021-04-16T16:22:00+00:00

Cache miss estimation for non-stationary request processes.
https://www.zbmath.org/1456.68020
Olmos, Felipe; Graham, Carl; Simonian, Alain
Summary: The goal of the paper is to evaluate the miss probability of a Least Recently Used (LRU) cache when it is offered a non-stationary request process given by a Poisson cluster point process. First, we construct a probability space using Palm theory, describing how to consider a tagged document with respect to the rest of the request process. This framework allows us to derive a fundamental integral formula for the expected number of misses of the tagged document. Then, we consider the limit when the cache size and the arrival rate go to infinity in proportion, and use the integral formula to derive an asymptotic expansion of the miss probability in powers of the inverse of the cache size. This enables us to quantify and improve the accuracy of the so-called \textit{Che approximation}.

Modeling the signaling overhead in host identity protocol-based secure mobile architectures.
https://www.zbmath.org/1456.68019
Faigl, Zoltán; Telek, Miklós
Summary: One of the key issues in recent mobile telecommunication is to increase the scalability of current packet data networks. This goes along with the requirement of reducing the signaling load related to establishment and handover procedures. This paper establishes an analytical model to analyze the signaling overhead of two different secure mobile architectures. Both are based on the Host Identity Protocol for secure signaling and use IPsec for secure data transport.
The paper presents the cumulative distribution function and moments of security association periods, and calculates the rates of the different signaling procedures in a synthetic network model, assuming an M/G/\(\infty\) process for session establishments between end-nodes. Using the model, it is shown that the Ultra Flat Architecture has significant performance gains over the traditional End-to-End HIP protocol in large-scale mobile environments in the access networks and toward the rendezvous service, but performs worse in the core transport network between the GWs.

Performance analysis of backup-task scheduling with deadline time in cloud computing.
https://www.zbmath.org/1456.68014
Hashimoto, Kyosuke; Masuyama, Hiroyuki; Kasahara, Shoji; Takahashi, Yutaka
Summary: In large-scale parallel job processing for cloud computing, a huge task is divided into subtasks, which are processed independently on a cluster of machines called workers. Since the task processing lasts until all the subtasks are completed, a slow worker machine makes the overall task-processing time long, degrading the task-level throughput. To alleviate this performance degradation, MapReduce conducts backup execution, in which the master node schedules the remaining in-progress subtasks when the whole task operation is close to completion. In this paper, we investigate the effect of backup tasks on the task-level throughput. We consider backup-task scheduling in which a backup subtask for a worker starts when the subtask-processing time of the worker reaches the deadline time. We analyze the task-level processing-time distribution by considering the maximum subtask-processing time among workers.
The task throughput and the total of all the workers' processing times are derived when the worker processing time (WPT) follows a hyper-exponential, Weibull, or Pareto distribution. We also propose an approximate method to derive the performance measures based on extreme value theory. The approximations are validated by Monte Carlo simulation. Numerical examples show that the performance improvement achieved by backup tasks significantly depends on the workers' processing-time distribution.

Scheduling jobs with a V-shaped time-dependent processing time.
https://www.zbmath.org/1456.90078
Sedding, Helmut A.
Summary: In the field of time-dependent scheduling, a job's processing time is specified by a function of its start time. While monotonic processing-time functions are well known in the literature, this paper introduces non-monotonic functions with a convex, piecewise-linear V-shape similar to the absolute value function. They attain their minimum at an ideal start time, which is the same for all given jobs; there, the processing time equals the job's basic processing time. Earlier or later, it increases linearly, with slopes that can be asymmetric and job-specific. The objective is to sequence the given jobs on a single machine and minimize the makespan. This is motivated by production planning for moving car assembly lines, in particular, sequencing a worker's assembly operations such that the time-dependent walking times to gather materials from the line side are minimized. This paper characterizes the problem's computational complexity from several angles. NP-hardness is shown even if the two slopes are the same for all jobs. A fully polynomial-time approximation scheme is devised for the more general case of agreeable ratios of basic processing time and slopes.
In the most general case, with job-specific slopes, several polynomially solvable cases are identified.

On the complexity of constructing a minmax regret solution for the two-machine flow shop problem under the interval uncertainty.
https://www.zbmath.org/1456.90079
Shafransky, Yakov; Shinkarevich, Viktor
Summary: We prove the NP-hardness of constructing a minmax regret solution for the two-machine flow shop problem under interval uncertainty of the job processing times. The complexity status of the problem has been an open question for over 20 years. We establish the NP-hardness of this problem using a so-called alternative scheme for proving the NP-hardness of optimization problems. We also show that the problem is non-approximable in polynomial time.

New results for an open time-dependent scheduling problem.
https://www.zbmath.org/1456.90070
Gawiejnowicz, Stanisław; Kurc, Wiesław
Summary: We present several new results for a single-machine time-dependent scheduling problem of minimizing the total completion time of a set of linearly deteriorating jobs with unit basic processing times. First, we show new properties of cyclic transformations of V-shaped sequences for this problem. Next, applying these results, we prove a new necessary condition for schedule optimality for the considered problem, which decreases the previous bound on the cardinality of the set containing all possible optimal schedules by a multiplicative factor that is at most proportional to the reciprocal of the square root of the number of jobs.
Finally, we compare the strength of the new and the previous necessary conditions by estimating the numbers of schedules satisfying the respective conditions.

Approximation algorithms for energy-efficient scheduling of parallel jobs.
https://www.zbmath.org/1456.90072
Kononov, Alexander; Kovalenko, Yulia
Summary: In this paper, we consider homogeneous scheduling on speed-scalable processors, where the energy consumption is minimized. While most previous works have studied single-processor jobs, we focus on rigid parallel jobs, which use more than one processor at the same time. Each job is specified by a release date, a deadline, a processing volume, and the number of required processors. First, we develop constant-factor approximation algorithms for such interesting cases as agreeable jobs without migration and preemptive instances. Next, we propose a configuration linear program, which allows us to obtain an ``almost exact'' solution for the preemptive setting. Finally, in the case of non-preemptive agreeable jobs with unit-work operations, we present a 3-approximation algorithm obtained by generalizing the known exact algorithm for single-processor jobs.

On a multiserver queueing system with customers' impatience until the end of service under single and multiple-vacation policies.
https://www.zbmath.org/1456.60243
Kadi, Mokhtar; Bouchentouf, Amina Angelika; Yahiaoui, Lahcene
Summary: This paper deals with a multiserver queueing system with Bernoulli feedback and impatient customers (balking and reneging) under synchronous multiple- and single-vacation policies. Reneged customers may be retained in the system.
Using the probability generating function (PGF) technique, we formally obtain the steady-state solution of the proposed queueing system. Further, important performance measures and a cost model are derived. Finally, numerical examples are presented.

Analysis of MAP/PH(1), PH(2)/2 queue with Bernoulli schedule vacation, Bernoulli feedback and renege of customers.
https://www.zbmath.org/1456.60237
Ayyappan, G.; Gowthami, R.
Summary: A classical queueing model with two servers, in which the interarrival times follow a Markovian arrival process, the service times are phase-type distributed, and the remaining random variables are exponentially distributed, is studied in this paper. The resulting QBD process is investigated in the steady state by employing the matrix-analytic method. We also carry out a busy-period analysis of our model and discuss the waiting-time distribution for our system. Some performance measures are provided. Finally, a few numerical and graphical examples are given.

Performance evaluation of components using a granularity-based interface between real-time calculus and timed automata.
https://www.zbmath.org/1456.68018
Altisen, Karine; Liu, Yanhong; Moy, Matthieu
Summary: To analyze complex and heterogeneous real-time embedded systems, recent works have proposed interface techniques between real-time calculus (RTC) and timed automata (TA), in order to take advantage of the strengths of each technique for analyzing various components. But the time needed to analyze a state-based component modeled by TA may be prohibitively high, due to the state-space explosion problem.
In this paper, we propose a framework of granularity-based interfacing to speed up the analysis of a TA-modeled component. First, we abstract fine models to work with event streams at a coarse granularity. We perform the analysis of the component at multiple coarse granularities and then, based on RTC theory, we derive lower and upper bounds on the arrival patterns of the fine output streams using the causality closure algorithm of
\textit{M. Moy} and \textit{K. Altisen} [Lect. Notes Comput. Sci. 6015, 358--372 (2010; Zbl 1284.68092)].
Our framework can help to achieve tradeoffs between precision and analysis time.
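The coarse-granularity view of event streams described in this summary can be illustrated with a small sketch. The function below is a hypothetical helper, not the authors' interfacing framework and not the causality-closure algorithm of Moy and Altisen: it merely derives empirical upper and lower arrival-curve bounds (in the spirit of RTC arrival curves) of an event trace at a chosen time granularity.

```python
# Hypothetical illustration (not the paper's algorithm): empirical upper and
# lower bounds on the number of events observed in any time window, evaluated
# only at multiples of a chosen granularity g, as in real-time calculus (RTC).

def arrival_bounds(timestamps, granularity, horizon):
    """For each window length delta = g, 2g, ..., horizon, slide a window of
    that length over [0, horizon) in steps of g and record the max/min event
    counts. Returns (deltas, upper_bounds, lower_bounds)."""
    deltas, upper, lower = [], [], []
    delta = granularity
    while delta <= horizon:
        counts = []
        t = 0.0
        while t + delta <= horizon:
            # Count events falling in the half-open window [t, t + delta).
            counts.append(sum(1 for s in timestamps if t <= s < t + delta))
            t += granularity
        deltas.append(delta)
        upper.append(max(counts))
        lower.append(min(counts))
        delta += granularity
    return deltas, upper, lower
```

For a strictly periodic trace the two bounds coincide at every window length; burstier traces separate them, and a coarser granularity yields fewer, coarser sample points of the curves.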
For the entire collection see [Zbl 1415.68021].

Near-linear-time approximation algorithms for scheduling a batch-processing machine with setups and job rejection.
https://www.zbmath.org/1456.90077
Ou, Jinwen
Summary: In this paper we study a single batch-processing machine scheduling model. In our model, a set of jobs with different release dates needs to be scheduled on a single machine that can process a batch of jobs simultaneously. Each batch incurs a fixed setup time and a fixed setup cost. The decision maker may reject some of the jobs by paying a penalty cost so as to achieve a tight makespan, but the total rejection penalty cost is required to be no greater than a given value. Our model extends several existing batch-processing machine scheduling models in the literature. We present efficient approximation algorithms with near-linear time complexities.

Analyzing the models of systems with heterogeneous servers.
https://www.zbmath.org/1456.60245
Melikov, A. Z.; Ponomarenko, L. A.; Mekhbaliyeva, E. V.
Summary: The mathematical model of a queueing system with heterogeneous servers, without queues, and with two types of requests is investigated. High-priority requests are processed on fast servers, while low-priority calls are processed on slow servers. If all servers in some group are busy, reassignment of requests to the other group is allowed. Reassignment is based on random schemes, and the reassignment probability depends on the number of busy servers in the appropriate group. Exact and approximate methods are developed for the analysis of the characteristics of the system.
Explicit formulas for calculating approximate values of these characteristics are proposed.

A new tool for the performance analysis of massively parallel computer systems.
https://www.zbmath.org/1456.68021
Stefanek, Anton; Hayden, Richard; Bradley, Jeremy
Summary: We present a new tool, GPA, that can generate key performance measures for very large systems. Based on solving systems of ordinary differential equations (ODEs), this method of performance analysis is far more scalable than stochastic simulation. The GPA tool is the first to produce higher-moment analysis from a differential-equation approximation, which is essential, in many cases, to obtain an accurate performance prediction. We identify so-called switch points as the source of error in the ODE approximation. We investigate the switch-point behaviour in several large models and observe that, as the scale of the model increases, the ODE performance prediction generally improves in accuracy. In the case of the variance measure, we are able to justify theoretically that, in the limit of model scale, the ODE approximation can be expected to tend to the actual variance of the model.
For the entire collection see [Zbl 1415.68021].
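The ODE-based analysis summarized in the last entry can be illustrated with a minimal fluid (mean-field) sketch. The two-state client population, the rates `a` and `b`, and the function name `fluid_thinking` are illustrative assumptions for this example, not GPA's input language or models.

```python
# A minimal sketch of the fluid/mean-field ODE approximation underlying tools
# such as GPA: for N independent clients that alternate between a "thinking"
# and a "requesting" state at rates a and b respectively, the expected number
# x(t) of thinking clients approximately satisfies
#     dx/dt = -a*x + b*(N - x).
# The model and rates here are illustrative assumptions.

def fluid_thinking(N, a, b, x0, t_end, dt=1e-3):
    """Integrate dx/dt = -a*x + b*(N - x) with explicit Euler from x(0) = x0."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-a * x + b * (N - x))
        t += dt
    return x
```

Solving this single ODE replaces simulating all N clients: the trajectory approaches the fixed point x* = b*N/(a + b) regardless of N, which is why the approach scales where stochastic simulation does not.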