19 similar documents found; search took 187 ms
1.
For smart electricity meters, which are characterized by high reliability and long service life, reliability assessment via accelerated degradation testing is an effective approach. During accelerated degradation testing, under elevated stress, both performance degradation of the meter and complete (whole-meter) failures may be observed. How to fuse the whole-meter failure data from accelerated life testing with the pseudo-lifetime data converted from performance degradation data, so as to obtain a comprehensive reliability assessment, is an urgent problem in smart-meter reliability evaluation. This paper proposes a Bayesian reliability assessment method for smart electricity meters, presents data-processing procedures and computational models for fusing whole-meter failure data with pseudo-lifetime data, and discusses the calculation of pseudo-failure data and consistency tests between whole-meter failure data and pseudo-failure lifetime data.
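The data fusion described above can be sketched with a simple conjugate Bayesian model. The exponential lifetime assumption, the Gamma(1, 10) prior, and all data values below are illustrative stand-ins, not the model developed in the article:

```python
import math

def gamma_posterior(prior_a, prior_b, lifetimes):
    # Conjugate update for an exponential lifetime model: the failure
    # rate lam ~ Gamma(a, b); each observed lifetime t contributes
    # a += 1 and b += t, regardless of whether t is a whole-meter
    # failure time or a pseudo-lifetime from degradation data.
    a = prior_a + len(lifetimes)
    b = prior_b + sum(lifetimes)
    return a, b

# Hypothetical data (units: thousand hours): whole-meter failure
# times and pseudo-lifetimes extrapolated from degradation paths.
failure_times = [12.0, 15.5, 9.8]
pseudo_lifetimes = [14.2, 11.7, 13.9, 16.1]

a, b = gamma_posterior(1.0, 10.0, failure_times + pseudo_lifetimes)
rate = a / b                              # posterior mean of lam
reliability_10k = math.exp(-rate * 10.0)  # plug-in estimate of R(10)
```

Both data sources enter the same likelihood here; the paper's consistency test, which decides whether they may be pooled at all, is not reproduced.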
2.
3.
4.
5.
Metallized film capacitors are among the most important components of the power-conditioning system of an inertial confinement fusion laser facility, and their reliability strongly affects the reliability and maintenance cost of the whole facility. Based on an analysis of the failure mechanisms of metallized film capacitors, a Wiener process is used to model their performance degradation, from which the lifetime distribution is derived. On this basis, a reliability assessment method that combines performance degradation data with lifetime data is proposed. A method for analyzing assessment accuracy is also given, and the accuracies of the combined method and of the degradation-data-only method are compared; the results show that the combined method is the more accurate of the two.
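Under a Wiener degradation model X(t) = μt + σB(t) with failure threshold w, the lifetime is the first passage time to w, which follows an inverse Gaussian distribution, so reliability has a closed form. A minimal sketch with illustrative parameters (not the capacitor's fitted values):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def wiener_reliability(t, mu, sigma, w):
    """R(t) = P(T > t) where T is the first passage time of the
    Wiener process X(t) = mu*t + sigma*B(t) to the threshold w.
    T follows an inverse Gaussian distribution with this CDF."""
    s = sigma * math.sqrt(t)
    F = (Phi((mu * t - w) / s)
         + math.exp(2.0 * mu * w / sigma ** 2) * Phi(-(mu * t + w) / s))
    return 1.0 - F

# Illustrative values: drift 1, diffusion 1, threshold 10
# (mean lifetime w/mu = 10 time units).
r_early = wiener_reliability(5.0, 1.0, 1.0, 10.0)
r_late = wiener_reliability(20.0, 1.0, 1.0, 10.0)
```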
6.
王正良 《工程物理研究院科技年报》2008,(1)
The normal distribution is symmetric about its mean, with support running from -∞ to +∞. However, many experimental quantities, such as structural strength, stress, and the operating time of detonator assemblies, take only positive values and therefore do not, strictly speaking, follow a normal distribution. Using a truncated normal distribution as an approximation fits such data more closely.
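As a sketch, the density of a normal distribution left-truncated at zero is the normal density rescaled by the probability mass above zero; the parameters below are illustrative:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncnorm_pdf(x, mu, sigma, lower=0.0):
    """Density of N(mu, sigma**2) restricted to [lower, inf): the
    normal density rescaled by the mass above the truncation point."""
    if x < lower:
        return 0.0
    z = (x - mu) / sigma
    density = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return density / (1.0 - Phi((lower - mu) / sigma))

# A plain N(2, 1.5**2) puts roughly 9% of its mass below zero, which
# a strength or timing quantity cannot have; the truncated density
# integrates to 1 on [0, inf).  Check with a crude Riemann sum:
dx = 0.001
total = sum(truncnorm_pdf(i * dx, 2.0, 1.5) * dx for i in range(30000))
```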
7.
Equipment reliability is a prerequisite for mission accomplishment, and assessing it provides theoretical support for mission decisions. Most existing studies of equipment reliability assessment are based on probability and statistics, whose accuracy is limited by sample size, so such assessments inevitably carry errors that grow as samples shrink. To remove this dependence on sample size, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is introduced. To reduce TOPSIS's sensitivity to subjective factors, the determination of indicator weights and of the ideal solutions is revised, and a "passing score" concept is introduced so that the grading of assessment results has a quantitative, objective basis. A reliability assessment model for a piece of equipment is then built on this improved TOPSIS method. Finally, a worked example computed in MATLAB verifies the method; the assessment results can support decisions by the equipment's users and commanders.
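A minimal TOPSIS sketch follows; the decision matrix and weights are hypothetical, and the paper's revised weighting scheme and "passing score" grading are not reproduced:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by relative
    closeness to the ideal solution.  benefit[j] is True when larger
    values of criterion j are better, False for cost criteria."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal and anti-ideal points per criterion.
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    anti = [min(V[i][j] for i in range(m)) if benefit[j]
            else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((V[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Hypothetical example: three systems scored on availability
# (benefit) and failure rate (cost).
scores = topsis([[0.90, 0.02], [0.80, 0.01], [0.95, 0.05]],
                weights=[0.6, 0.4], benefit=[True, False])
```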
8.
9.
Spectral analysis of lubricating oil is an important means of monitoring gearbox wear. Using elemental concentrations obtained by atomic emission spectroscopy of the lubricating oil in a bench test of a heavy-duty vehicle gearbox, correlation analysis and wear-mechanism analysis identify the metallic elements that reflect the overall wear state and failure characteristics of the unit. Spectral readings taken after oil top-ups and oil changes are corrected, and the concentration of each characteristic element is expressed as a linear function of test time. Accounting for both the temporal trend and the scatter of the measurements, gearbox reliability is then assessed from the mean and standard deviation of a normal distribution fitted to the spectral readings at each time point, and the influence of the failure threshold on the assessment is discussed. The study shows that the Cu concentration in the lubricating oil correlates strongly with the other metallic elements and has the largest magnitude, so it both reflects overall gearbox wear and is easy to measure; time-series Cu spectral data from multiple test samples therefore suffice for assessing the reliability of heavy-duty vehicle gearboxes. If the failure threshold is fixed, a larger threshold gives higher reliability at any given time; if the threshold is random, greater threshold dispersion slows the decline of reliability over time. When the reliability R exceeds 0.9, the mean of the failure threshold strongly affects the assessment, and the reliability at a given time decreases as the threshold's standard deviation grows. Combining spectral analysis with probability and statistics for gearbox reliability assessment extends the range of application of spectral-analysis techniques and is a novel contribution.
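For a fixed threshold L, the reliability computation above reduces to R(t) = Φ((L − μ(t)) / σ(t)). A sketch with hypothetical linear-trend coefficients (the units and values are assumptions, not the paper's fitted results):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gearbox_reliability(t, a, b, c, d, threshold):
    """R(t) = P(Cu concentration < threshold) when the spectral
    reading at time t is modeled as N(a + b*t, (c + d*t)**2)."""
    mu = a + b * t       # linear mean trend fitted to the time series
    sigma = c + d * t    # scatter of the readings grows with time
    return Phi((threshold - mu) / sigma)

# Illustrative coefficients (ppm vs. hours); compare two fixed
# failure thresholds at the same test time.
r_low = gearbox_reliability(200.0, 5.0, 0.2, 1.0, 0.02, 40.0)
r_high = gearbox_reliability(200.0, 5.0, 0.2, 1.0, 0.02, 60.0)
```

Consistent with the abstract, the larger fixed threshold yields the higher reliability at the same time point.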
10.
王玉明 《工程物理研究院科技年报》2003,(1):139-140
The maximum-entropy reliability assessment method is flawed; a rigorous derivation yields an improved assessment formula. The maximum-entropy method assesses reliability mainly from the product performance-margin information obtained in strengthened (overstress) tests, with the margin coefficient defined as K = X_A / X_B, the ratio of the strengthened test condition X_A to the normal operating condition X_B.
11.
In this paper, we present the concepts of the logical entropy of order m, logical mutual information, and the logical entropy of information sources. We find upper and lower bounds for the logical entropy of a random variable using convex functions. We show that the logical entropy of a joint distribution is always less than the sum of the logical entropies of its component variables. We define the logical Shannon entropy and logical metric permutation entropy for an information system and examine the properties of these entropies. Finally, we examine the logical metric entropy and logical permutation entropy of maps.
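These quantities are straightforward to compute: the logical entropy of a distribution p is h(p) = 1 − Σ pᵢ², the probability that two independent draws from p differ. A sketch with a hypothetical joint distribution, illustrating the subadditivity property stated in the abstract:

```python
def logical_entropy(probs):
    """Logical entropy h(p) = 1 - sum_i p_i**2: the probability
    that two independent draws from p give distinct outcomes."""
    return 1.0 - sum(p * p for p in probs)

# Hypothetical joint distribution of (X, Y); marginals by summation.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

h_joint = logical_entropy(joint.values())
h_x = logical_entropy(px.values())
h_y = logical_entropy(py.values())
# Subadditivity from the abstract: h(X, Y) <= h(X) + h(Y).
```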
12.
In order to deal with the new threat of low-altitude, slow, small (LSS) targets in air-defense operations and to support LSS target interception decisions, we propose a simple and reliable LSS target threat assessment method. Based on the detectability of LSS targets and their threat characteristics, this paper defines threat evaluation factors and a threat-degree quantization function suited to LSS targets. LSS targets not only share the threat characteristics of traditional air targets but also exhibit the distinctive traits of flexible mobility and dynamic mission planning. We therefore use the analytic hierarchy process (AHP) and information entropy to determine the subjective and objective threat-factor weights of LSS targets, and combine them through an optimization model to obtain more reliable evaluation weights. Finally, the effectiveness and credibility of the proposed method are verified by simulation experiments.
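The objective (information-entropy) half of the weighting can be sketched as follows; the threat-factor matrix is hypothetical, and the AHP weights and the optimization-based combination used in the paper are not reproduced:

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights via the entropy weight method:
    criteria whose values vary more across targets carry more
    information (lower entropy) and so receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    ent = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        s = sum(col)
        p = [v / s for v in col]
        # Normalized Shannon entropy of column j (in [0, 1]).
        ent.append(-sum(q * math.log(q) for q in p if q > 0) / math.log(m))
    total = sum(1.0 - e for e in ent)
    return [(1.0 - e) / total for e in ent]

# Hypothetical normalized threat factors (speed, altitude, heading)
# for three LSS targets; columns 2 and 3 are constant, so they
# carry no discriminating information and get zero weight.
w = entropy_weights([[0.2, 0.9, 0.5],
                     [0.8, 0.9, 0.5],
                     [0.5, 0.9, 0.5]])
```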
13.
14.
15.
Mustapha Muhammad, Huda M. Alshanbari, Ayed R. A. Alanzi, Lixia Liu, Waqas Sami, Christophe Chesneau, Farrukh Jamal 《Entropy (Basel, Switzerland)》2021,23(11)
In this article, we propose the exponentiated sine-generated family of distributions. Important properties are derived, including the series representation of the probability density function, the quantile function, moments, stress-strength reliability, and Rényi entropy. A particular member, the exponentiated sine Weibull distribution, is highlighted: we analyze its skewness and kurtosis, moments, quantile function, residual mean and reversed mean residual life functions, order statistics, and extreme value distributions. Maximum likelihood estimation and Bayes estimation under the squared error loss function are considered. Simulation studies assess the techniques, whose performance is satisfactory as measured by the mean square error, confidence intervals, and coverage probabilities of the estimates. The stress-strength reliability parameter of the exponentiated sine Weibull model is derived and estimated by maximum likelihood, and nonparametric bootstrap techniques are used to approximate the confidence interval of the reliability parameter; a simulation examines the mean square error, standard deviations, confidence intervals, and coverage probabilities of this parameter. Finally, three real applications of the exponentiated sine Weibull model are provided, one of which involves stress-strength data.
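A common form of an exponentiated sine-generated family takes F(x) = [sin(π/2 · G(x))]^α for a baseline CDF G; the sketch below instantiates this with a Weibull baseline. The exact parameterization is an assumption and should be checked against the article:

```python
import math

def weibull_cdf(x, k, lam):
    """Baseline Weibull CDF with shape k and scale lam."""
    return 1.0 - math.exp(-((x / lam) ** k)) if x > 0 else 0.0

def esw_cdf(x, alpha, k, lam):
    # Assumed family form: F(x) = sin(pi/2 * G(x)) ** alpha,
    # with G the Weibull CDF; verify against the article.
    return math.sin(0.5 * math.pi * weibull_cdf(x, k, lam)) ** alpha

# The construction preserves the CDF axioms: 0 at the origin,
# monotone increasing, and tending to 1.
vals = [esw_cdf(0.5 * i, 2.0, 2.0, 1.0) for i in range(10)]
```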
16.
Chenguang Lu 《Entropy (Basel, Switzerland)》2021,23(8)
In the rate-distortion function and the Maximum Entropy (ME) method, Minimum Mutual Information (MMI) distributions and ME distributions are expressed by Bayes-like formulas involving Negative Exponential Functions (NEFs) and partition functions. Why do these non-probability functions appear in Bayes-like formulas? The rate-distortion function also has three disadvantages: (1) the distortion function is subjectively defined; (2) a distortion function between instances and labels is often difficult to define; (3) it cannot be used for data compression according to the labels' semantic meanings. The author has previously proposed the semantic information G measure, which uses both statistical probability and logical probability. We can now interpret NEFs as truth functions, partition functions as logical probabilities, Bayes-like formulas as semantic Bayes' formulas, MMI as Semantic Mutual Information (SMI), and ME as extreme ME minus SMI. To overcome the above disadvantages, this paper establishes the relationship between truth functions and distortion functions, obtains truth functions from samples by machine learning, and constructs constraint conditions with truth functions to extend rate-distortion functions. Two examples help readers understand the MMI iteration and support the theoretical results. Using truth functions and the semantic information G measure, we can combine machine learning and data compression, including semantic compression. Further studies are needed to explore general data compression and recovery according to semantic meaning.
17.
Identifying influential nodes in complex networks has attracted much research attention in recent years. However, because of their high time complexity, methods based on global attributes are unsuitable for large-scale complex networks, and considering multiple attributes rather than a single one can improve performance. This paper therefore proposes a new multiple local attributes-weighted centrality (LWC) based on information entropy, combining degree and clustering coefficient and using both one-step and two-step neighborhood information to evaluate node influence and identify influential nodes. First, the influence of a node is divided into direct and indirect influence, with degree and clustering coefficient selected as the direct measures. Second, two indirect measures are defined from them: the two-hop degree and the two-hop clustering coefficient. Information entropy is then used to weight these four measures, and the LWC of each node is their weighted sum. Finally, all nodes are ranked by LWC value to identify the influential ones. The proposed LWC method is applied to four real-world networks and compared with five well-known methods; the experimental results demonstrate its good discrimination capability and accuracy.
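The four local measures and their weighted combination can be sketched on a toy graph. The entropy-derived weights are assumed here, and the "two-hop" measures are simplified to sums over one-step neighbours, which may differ from the paper's exact definitions:

```python
from itertools import combinations

# Toy undirected graph as an adjacency dict (hypothetical example).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3}}

def degree(v):
    return len(adj[v])

def clustering(v):
    """Local clustering coefficient: fraction of neighbour pairs
    that are themselves connected."""
    k = len(adj[v])
    if k < 2:
        return 0.0
    links = sum(1 for u, w in combinations(adj[v], 2) if w in adj[u])
    return 2.0 * links / (k * (k - 1))

# Indirect-influence measures, sketched as neighbourhood sums.
def two_hop_degree(v):
    return sum(degree(u) for u in adj[v])

def two_hop_clustering(v):
    return sum(clustering(u) for u in adj[v])

def lwc(v, weights):
    """Weighted sum of the four local influence measures."""
    measures = (degree(v), clustering(v),
                two_hop_degree(v), two_hop_clustering(v))
    return sum(w * m for w, m in zip(weights, measures))

# Entropy-derived weights are assumed for illustration.
weights = (0.3, 0.2, 0.3, 0.2)
ranking = sorted(adj, key=lambda v: lwc(v, weights), reverse=True)
```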
18.
A large amount of test data accumulates during the development stage of a product. To use this prior data effectively and reduce the sample size of testability verification tests, a testability verification scheme based on the information entropy of prior tests is proposed. Information entropy is used to measure the contribution of the multiple prior test datasets from the development stage to the verification test, and, on the principle that the average mutual-information entropy and the total information content are preserved, the multiple prior datasets are converted into equivalent one-shot success/failure data. A consistency test then determines the compatibility level between the prior data and the test data; with a Beta distribution as the prior, a weighted-mixture posterior is constructed using Bayesian theory, and a test scheme satisfying both parties' risk requirements is solved for using Bayesian average-risk theory. Finally, a testability verification study of a radar transmitter unit verifies the effectiveness of the scheme.
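The weighted-mixture posterior step can be sketched for success/failure data with a Beta prior. Treating the compatibility level ρ directly as the mixture weight is a simplification, and all numbers are hypothetical; the paper derives the weight from the consistency test:

```python
def mixed_posterior(prior_a, prior_b, successes, failures, rho):
    """Posterior-mean estimate of the fault detection rate under a
    weighted mixture of an informative Beta(prior_a, prior_b) prior
    (built from equivalent prior-test data) and a non-informative
    Beta(1, 1) prior.  rho is the compatibility level, used here
    directly as the mixture weight (a simplification)."""
    m_informative = ((prior_a + successes)
                     / (prior_a + prior_b + successes + failures))
    m_vague = (1.0 + successes) / (2.0 + successes + failures)
    return rho * m_informative + (1.0 - rho) * m_vague

# Hypothetical prior (equivalent to 8 passes / 2 fails) and a
# verification test with 9 passes out of 10.
p_full = mixed_posterior(8.0, 2.0, 9, 1, 1.0)  # fully compatible
p_none = mixed_posterior(8.0, 2.0, 9, 1, 0.0)  # prior discarded
p_half = mixed_posterior(8.0, 2.0, 9, 1, 0.5)
```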
19.
We provide a stochastic extension of the Baez–Fritz–Leinster characterization of the Shannon information loss associated with a measure-preserving function. This recovers the conditional entropy and a closely related information-theoretic measure that we call conditional information loss. Although not functorial, these information measures are semi-functorial, a concept we introduce that is definable in any Markov category. We also introduce the notion of an entropic Bayes' rule for information measures, and we provide a characterization of conditional entropy in terms of this rule.
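For a deterministic map f, the information loss recovered as conditional entropy is H(X | f(X)) = H(X) − H(f(X)), since H(X, f(X)) = H(X). A small sketch with a hypothetical distribution:

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical source: X uniform on {0, 1, 2, 3}; the map f
# collapses values mod 2, discarding one bit.
px = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
f = lambda x: x % 2

pf = {}
for x, p in px.items():
    pf[f(x)] = pf.get(f(x), 0.0) + p

# Information loss of f: H(X | f(X)) = H(X) - H(f(X)),
# valid because H(X, f(X)) = H(X) for deterministic f.
loss = entropy(px.values()) - entropy(pf.values())
```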