

Billions of tons of waste acid liquid containing arsenic are produced annually by factories worldwide that use arsenic-bearing minerals as raw materials. The existing methods for treating arsenic-bearing waste acid liquid fall into several categories: ion exchange; adsorption (activated alumina and activated carbon); ultrafiltration; reverse osmosis; and precipitation or adsorption by metals (predominantly ferric chloride) followed by coagulation [1]; [3]; [4]; [5]. At present, one of the most widely used methods is vulcanizing-agent leaching, in which arsenic from the waste acid liquid reacts with a vulcanizing agent such as sodium sulfide. This method is simple, low in cost, and achieves a high arsenic separation rate (about 99.97%) [6]. However, significant quantities of arsenic sulfide byproduct are produced in the process, which raises a series of disposal challenges. Arsenic sulfide residue is highly hazardous to human health and may cause serious environmental pollution if not disposed of properly. On the other hand, arsenic sulfide has high application value in medicine [7]; [8]; [9]; [10]; [11] and in the metallurgical industry [12]. For instance, arsenic sulfide exerts antitumor effects by inhibiting the growth of K562 and SMMC-7721 cells. It is also a necessary raw material in the production of herbicides and wood preservatives. It is therefore of great interest to turn hazardous arsenic sulfide residue into a useful resource. Moreover, recovering valuable components from arsenic sulfide residue is one way to expand the utilization of arsenic and sulfur resources.

So far, publicly reported research on treating arsenic sulfide residue has mainly focused on curing, landfill, or recycling in the form of arsenic trioxide, arsenate, arsenic trichloride, and so on [1]; [13]. One of the most attractive options is a Japanese technology, detailed as follows: arsenic sulfide is repulped and heated to 70 °C in CuSO4 solution, under which conditions it reacts with the CuSO4 to form CuS. Meanwhile, the arsenic dissolves as arsenious acid. Subsequently, As2O3 is precipitated from the arsenious acid solution by cooling to 25 °C [14]. Obviously, solid–liquid separation is necessary at each step, and the emission of arsenic trioxide during crystallization should not be ignored either. As things now stand, the existing methods, including the Japanese technology, are less than satisfactory owing to drawbacks such as complex and time-consuming process flows, low recovery efficiency, high consumption of chemical reagents, and high operating costs. Above all, these processes inevitably produce toxic gas and large quantities of waste acid liquid, which can cause secondary environmental pollution.

Motivated by the above considerations, a vacuum separation method, which offers advantages such as a simple technological flow sheet, low raw-material and energy consumption, and no secondary off-gas or wastewater [15]; [16]; [17], is proposed in this work for treating arsenic sulfide residue, with the aim of recovering valuable components without negative environmental impact. During vacuum separation, a three-step distillation was employed: elemental sulfur/arsenic trioxide, arsenic sulfide, and lead sulfide were evaporated into the distillate in the primary, secondary, and third distillations, respectively, while calcium fluoride was left behind as the final distilland.

2. Experimental section

2.1. Experimental materials

The arsenic sulphide residue used in the experiments was acquired from a lead plant in which arsenic was recycled from wastewater through sodium sulfide leaching. A small amount of arsenic trioxide had formed, owing to oxidation of the waste residue after stockpiling for a period of time. A homogenized sample was characterized by titrimetric analysis; the contents of As, S, Pb and Ca are shown in Table 1.


Table 2.

The panel concluded that a PRO measure should be considered within the proposed context of use and implementation strategy. Therefore, information about these should be provided with candidate measures. It is acknowledged that not all the information specified in the table will be available or appropriate for all approaches, especially in these early days of developing and testing PRO performance measures. But the considerations should be addressed, and some may be tested within early implementation or pilot work.

The panel concluded that an actionable PRO measure is one that can identify patients for whom changes in care might be warranted and detects changes in outcomes after treatment. Demonstration of this capacity is advisable before widespread implementation. Because the use of PROs in performance evaluation has been uncommon in the past, it is acknowledged that research in other related contexts may be relied upon initially for such evidence, such as use of a measure in clinical trials or comparative effectiveness research. When adapting an existing PROM for use in performance measurement (e.g., a measure initially developed for use in clinical research), it should be considered whether it 1) assesses outcomes that are meaningful in the population/context of interest and are actionable; 2) has been found understandable to patients in qualitative assessments; 3) demonstrates validity, reliability, and responsiveness; and 4) is feasible to implement (e.g., is not excessively lengthy, has available language translations if necessary, and can be administered electronically if necessary). Specific approaches for adapting PROMs into PRO-PMs have previously been described by the National Quality Forum (NQF), as well as details about the distinctions between PROs, PROMs, and PRO-PMs [50].

The goal of measuring PROs and rewarding performance on the basis of PROs is to encourage clinicians and organizations to adopt procedures that improve outcomes experienced by patients. Accurate measurement of PROs will be hindered if variation in performance among providers primarily reflects differences in the underlying populations served and adjustment methods are not used. Quantifying changes in scores over time compared with baseline, rather than relying on cross-sectional analyses, allows patients to serve as their own controls and accounts for baseline health status. It is acknowledged that risk-adjustment approaches may be refined over time in a particular program and that risk adjustment may be deemed unnecessary in some contexts (which should be demonstrated with empiric evidence). A practical example of how the best practices may be used to develop a measure is provided in Table 3.
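The baseline-adjustment point above can be made concrete with a small sketch: comparing providers on change-from-baseline PRO scores lets each patient serve as their own control, whereas cross-sectional follow-up averages mostly reflect the baseline case mix. All scores below are invented for illustration.

```python
# Hypothetical (baseline, follow-up) PRO scores per patient for two providers.
# Provider A serves a sicker population at baseline; provider B a healthier one.

def mean(xs):
    return sum(xs) / len(xs)

provider_a = [(40, 55), (50, 62), (60, 75)]   # sicker population at baseline
provider_b = [(70, 78), (75, 84), (80, 90)]   # healthier population at baseline

# Cross-sectional comparison of follow-up scores favors provider B...
followup_a = mean([f for _, f in provider_a])   # 64.0
followup_b = mean([f for _, f in provider_b])   # 84.0

# ...but change scores account for baseline health status and favor provider A.
change_a = mean([f - b for b, f in provider_a])  # 14.0
change_b = mean([f - b for b, f in provider_b])  # 9.0
```

Here the ranking of the two providers reverses once baseline status is accounted for, which is exactly why cross-sectional analyses alone can mislead.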

Table 3.
Example of how the best practices can be used to develop a measure.

A large network of community oncology practices is interested in understanding differences between practices in postchemotherapy nausea control as a potential quality measure. The rationale for this measure is published cross-sectional data about rates of nausea and about variable compliance with published antiemetic guidelines (best practice 1). The purpose of the measure is to understand whether there are variable rates of nausea control between practices that might be improved through educational programs or feedback to providers (best practice 2). An existing nausea measure from the National Cancer Institute's (NCI's) PRO-CTCAE symptom library is selected on the basis of published qualitative and quantitative data in a similar community practice context, providing evidence that the measure evaluates a symptom that is important and meaningful to patients, with acceptable validity and reliability. Relevant permissions from the NCI are obtained (best practices 3 and 4). A multidisciplinary planning panel, including patient representatives, identifies the pertinent population as patients receiving moderately or highly emetogenic chemotherapy according to existing criteria (best practice 2). It is determined on the basis of a literature review to use an automated telephone system (IVRS) with live interviewer backup to assess symptoms at baseline (and to collect baseline data for risk adjustment), as well as daily following chemotherapy for 7 d (best practice 5). The proportion of patients experiencing moderate or severe nausea at more than one time point following chemotherapy at the practice level is chosen as the a priori primary end point, with risk adjustment for age, comorbidities, stage of cancer, number of prior lines of chemotherapy, time since diagnosis, baseline quality of life (via a PROMIS single measure), and baseline nausea (best practices 6 and 7).

Exploratory analyses will be used to evaluate alternative end points in the analysis phase and refine the risk-adjustment model. A sample size is determined by a biostatistician on the basis of published effect sizes for similar patients, accounting for anticipated missing data and attrition (best practice 6). Results will be fed back to practices, with a planned onsite educational program offered to sites with lower performance (best practice 8). Follow-up assessment of practices after the educational programs is planned to assess their impact, in addition to eliciting provider feedback about the program (best practice 9).

IVRS, interactive voice response system; PRO-CTCAE, Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events; PROMIS, Patient-Reported Outcomes Measurement Information System.


In order to derive the Helmholtz free energy of the crystalline material, we assume that the potential energy of the system composed of N atoms can be written as

equation (4): \( U = \frac{N}{2}\sum_i \varphi_{i0}\left(|\mathbf{r}_i + \mathbf{u}_i|\right), \)

where \( \mathbf{r}_i \) is the equilibrium position of the i-th atom, \( \mathbf{u}_i \) its displacement, and \( \varphi_{i0} \) the effective interaction energy between the zeroth and i-th atoms. We expand the potential energy \( \varphi_{i0}(|\mathbf{r}_i + \mathbf{u}_i|) \) in terms of the displacements up to fourth-order terms and evaluate the vibrational coupling parameters:

equation (5): \( U = \sum_i \left[ \varphi_{i0}(|\mathbf{r}_i|) + \frac{1}{2}\sum_{\alpha,\beta}\left(\frac{\partial^2 \varphi_{i0}}{\partial u_{i\alpha}\,\partial u_{i\beta}}\right)_{\mathrm{eq}} u_{i\alpha} u_{i\beta} + \frac{1}{6}\sum_{\alpha,\beta,\gamma}\left(\frac{\partial^3 \varphi_{i0}}{\partial u_{i\alpha}\,\partial u_{i\beta}\,\partial u_{i\gamma}}\right)_{\mathrm{eq}} u_{i\alpha} u_{i\beta} u_{i\gamma} + \frac{1}{24}\sum_{\alpha,\beta,\gamma,\eta}\left(\frac{\partial^4 \varphi_{i0}}{\partial u_{i\alpha}\,\partial u_{i\beta}\,\partial u_{i\gamma}\,\partial u_{i\eta}}\right)_{\mathrm{eq}} u_{i\alpha} u_{i\beta} u_{i\gamma} u_{i\eta} + \dots \right] \)

In order to derive the expression for the Helmholtz free energy, we need to evaluate analytically the following integrals:

equation (9): \( I_1 = \int_0^{\gamma_1} \langle u_i^4 \rangle \, d\gamma_1, \qquad I_2 = \int_0^{\gamma_2} \langle u_i^2 \rangle^2_{\gamma_1 = 0} \, d\gamma_2 \)

The Helmholtz free energy ψ is going to be used to calculate the various thermodynamic quantities (such as the thermal expansion coefficient, and bulk modulus which are closely related to the anharmonicity of thermal lattice vibrations) as well as to investigate the fcc → bcc phase transition of the cerium element.

Let us now consider the isothermal compressibility of the solid phase with fcc structure. By definition, the isothermal compressibility \( \chi_T \) is given in terms of the volume V and pressure P as

equation (11): \( \chi_T = -\frac{1}{V_0}\left(\frac{\partial V}{\partial P}\right)_T = \frac{1}{B_T}, \)

where \( B_T \) is the isothermal bulk modulus, and the pressure P can be determined from the free energy ψ of the crystal by

equation (12): \( P = -\left(\frac{\partial \psi}{\partial V}\right) = -\frac{r_1}{3V}\left(\frac{\partial \psi}{\partial r}\right). \)

Using Eq. (9), we derive the equation of state (EOS) describing the pressure–volume relation of the crystal lattice as

equation (16): \( Pv = -r\left[\frac{1}{6}\frac{\partial U_0}{\partial r} + \frac{\theta X}{2k}\frac{\partial k}{\partial r}\right], \)

where v is the atomic volume, \( v = V/N \). By solving the EOS (16), we obtain the nearest-neighbor distance (NND) \( r_1(P,T) \) between two neighboring atoms at pressure P and temperature T.
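To make the role of the EOS concrete, the sketch below solves an equation of this form for the nearest-neighbor distance by bisection. The pair-potential sum U0(r), force constant k(r), and the values of θ and X are invented Lennard-Jones-like stand-ins in reduced units, not the actual cerium model.

```python
# Toy solve of an EOS of the form P*v = -r*[(1/6) dU0/dr + theta*X/(2k) dk/dr]
# for the nearest-neighbor distance r1. All model functions and constants are
# hypothetical illustrations in reduced units.
import math

def eos_residual(r, P, theta=0.025, X=1.0):
    dU0_dr = 24.0 * (r**-7 - 2.0 * r**-13)   # toy Lennard-Jones-like derivative
    k = 36.0 * r**-8                          # toy harmonic force constant
    dk_dr = -288.0 * r**-9
    v = r**3 / math.sqrt(2.0)                 # atomic volume of an fcc lattice
    rhs = -r * (dU0_dr / 6.0 + theta * X / (2.0 * k) * dk_dr)
    return P * v - rhs                        # zero when the EOS is satisfied

def solve_r1(P, lo=0.8, hi=1.5, tol=1e-10):
    # Bisection; assumes the residual changes sign on [lo, hi].
    flo = eos_residual(lo, P)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = eos_residual(mid, P)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Increasing the pressure compresses the toy lattice, so `solve_r1(P)` decreases with P, mirroring how r1(P, T) enters the subsequent compressibility and thermal-expansion formulas.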

Furthermore, we also derive the expression for the linear thermal expansion coefficient as follows:

equation (17): \( \alpha = \frac{k_B \chi_T}{3}\left(\frac{\partial P}{\partial \theta}\right)_V = -\frac{2 k_B \chi_T}{3 r_1^2}\,\frac{1}{3N}\,\frac{\partial^2 \psi}{\partial \theta\,\partial r}. \)

2.2. Structural phase transition

In this subsection, we consider the structural phase transformation γ → δ (fcc → bcc) of cerium metal based on the thermodynamic condition that, at the pressure P and temperature T of the transition, the chemical potentials of the two phases are equal. Therefore, in order to investigate the phase transition of an element, we first need to calculate its Gibbs free energy, \( G(P,T) = \psi + PV \), because this determines which phase of the system is stable at a given pressure. According to thermodynamic considerations, the phase with the lowest Gibbs free energy is favored at each temperature and pressure. The difference in Gibbs energy between the two phases (fcc and bcc) at pressure P is given as

equation (18): \( \Delta G = G_{\mathrm{fcc}} - G_{\mathrm{bcc}} = \Delta\psi + P\Delta V \)
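The stability criterion above can be sketched in a few lines: at a given pressure, the phase with the lower G = ψ + PV is stable, and the transition pressure is where ΔG = 0. The ψ and V values below are hypothetical placeholders in arbitrary units, not computed cerium data.

```python
# Phase-stability sketch: phases maps name -> (psi, V), both hypothetical.

def stable_phase(P, phases):
    # The phase with minimal Gibbs free energy G = psi + P*V is stable.
    return min(phases, key=lambda name: phases[name][0] + P * phases[name][1])

def transition_pressure(phases, a="fcc", b="bcc"):
    # Delta G = Delta psi + P * Delta V = 0  =>  P = -Delta psi / Delta V.
    dpsi = phases[a][0] - phases[b][0]
    dV = phases[a][1] - phases[b][1]
    return -dpsi / dV

# Toy numbers: the fcc phase has lower psi but larger volume, so pressure
# eventually drives the transition to the denser toy bcc phase.
phases = {"fcc": (-1.00, 1.10), "bcc": (-0.98, 1.00)}
```

With these toy values, ΔG = −0.02 + 0.10 P, so fcc is stable below P = 0.2 and bcc above it, which is exactly the crossing that Eq. (18) is used to locate.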


Acknowledgments

The authors gratefully acknowledge the support of grants 50806082 and 51206189 from the National Natural Science Foundation of China, and of the Shandong Provincial Natural Science Foundation, China (ZR2011EEQ020).

Resistive switching; HfO2:Cu film; Annealing process; SCLC

1. Introduction

RRAM has attracted great interest as a next-generation non-volatile memory owing to its potential merits, including excellent scalability, fast operation speed, and low power consumption [1]; [2]. Resistive switching (RS) effects can be realized in various materials [2]; [3]; [4], especially in binary transition metal oxides (TMOs), owing to their simply controllable composition and good compatibility with the complementary metal oxide semiconductor (CMOS) process. RS behaviors in binary TMOs such as CuO [5], TiO2 [6], and HfO2 [7]; [8] have been studied in depth in terms of device performance and RS mechanism. Although the switching mechanisms are still under debate, the filamentary model is widely accepted for binary TMO films. In practical applications, some open issues remain to be solved, for example, how to reduce the fluctuations of the switching parameters and how to reduce the rather high operation current. Owing to the random nature of electrical breakdown, the resistance states and related switching parameters usually vary over a wide range. Many methods have been proposed to modify RS behaviors, such as inserting a buffer layer (metal or alloy) [9]; [10], annealing [11]; [12], and doping [13]; [14], which is considered one of the most effective ways to improve RS performance. Cu is compatible with CMOS technology, offers good mobility and conductivity, and has been used as a doping material to improve RS properties [3]; [4]. Meanwhile, a proper thermal annealing process can improve device performance by affecting the crystal structure and electrical properties of the films [11].

In this paper, doping and annealing are combined to enhance the RS behaviors of HfO2-based RRAM. Cu/X/n+Si (X = HfO2, HfO2:Cu, AHfO2:Cu, where AHfO2:Cu denotes the annealed HfO2:Cu film) structures were fabricated to investigate the RS characteristics. The results show that all devices exhibit bipolar RS behavior, and an improvement in RS behavior was obtained for the HfO2:Cu film with annealing.

2. Experiment

The Cu/X/n+Si (X = HfO2, HfO2:Cu, AHfO2:Cu) structures were fabricated as follows. After standard cleaning of Si wafers, a 20 nm thick HfO2 film was grown on the Si wafer by RF magnetron sputtering at room temperature with a base vacuum of 7.8 × 10−5 Pa. A metal Hf target (99.995%) and highly pure O2 (99.999%) were used as the Hf source and reactive gas, respectively. During deposition, the deposition rate was 1 nm/min, the flow rate ratio of Ar:O2 was 12:3, the working pressure was 0.3 Pa, and the sputtering power was 80 W. Under the same deposition conditions, the HfO2:Cu film (doped with 6% Cu) was fabricated by loading copper uniformly on the metal Hf target. Then, using a Heatpulse AG610 atmosphere-controllable rapid annealing furnace, the HfO2:Cu film underwent rapid annealing at 200 °C for 10 min in an N2 atmosphere; the resulting film is named the AHfO2:Cu film. Finally, Cu top electrodes were deposited on the film by evaporation using a shadow mask to pattern their size. The structure diagrams of the three samples are shown in Fig. 1(a). The crystal structures of the films were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The presence of Cu in the HfO2:Cu film was verified by X-ray photoelectron spectroscopy (XPS). Electrical testing was carried out with an HP 4155C semiconductor characterization system at room temperature by applying the bias voltage to the top electrode while the bottom electrode was grounded.


Nanoindentation was used to measure the hardness and Young's modulus of the films, using the Oliver and Pharr method [40] of calculation from the force–displacement curve. Indentation load (P) and displacement (h) were continuously recorded during one complete cycle of loading and unloading. The hardness (H) of a material is given by Equation (3):

equation (3): \( H = \frac{P_{\max}}{A}, \)

where \( P_{\max} \) is the maximum load applied and A is the residual indentation area. The hardness is usually the mean contact pressure under the indenter [40]; [41]; [42]. The slope of the unloading curve gives the contact stiffness (S), which is used to calculate the reduced elastic modulus \( E_r \) (Equation (4)), where \( A(h_c) \) is the area of the indentation at the contact depth \( h_c \) (the depth of the residual indentation prior to elastic recovery of its shape) and β is a geometrical constant on the order of unity. The relation between the reduced elastic modulus \( E_r \) and the elastic modulus of the specimen \( E_s \) is given in Equation (5); subscripts i and s stand for the indenter and sample materials, and Poisson's ratio is denoted by ν.

equation (4): \( E_r = \frac{\sqrt{\pi}}{2\beta}\,\frac{S}{\sqrt{A(h_c)}} \)

equation (5): \( \frac{1}{E_r} = \frac{1-\nu_i^2}{E_i} + \frac{1-\nu_s^2}{E_s} \)
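The Oliver–Pharr evaluation above is easy to sketch numerically. The indenter properties (Ei = 1000 GPa, Poisson's ratio 0.07) are the diamond values quoted in the text; the stiffness, contact area, and sample Poisson's ratio below are hypothetical inputs chosen so that, with S in mN/μm and A in μm², the moduli come out in GPa.

```python
# Oliver-Pharr sketch. Units: loads in mN, areas in um^2, stiffness in mN/um,
# so H, Er, Es come out in GPa (1 mN/um^2 = 1 GPa). Inputs are hypothetical.
import math

def hardness(P_max, A):
    # Equation (3): mean contact pressure under the indenter.
    return P_max / A

def reduced_modulus(S, A_hc, beta=1.0):
    # Equation (4): Er = sqrt(pi) * S / (2 * beta * sqrt(A(hc))).
    return math.sqrt(math.pi) * S / (2.0 * beta * math.sqrt(A_hc))

def sample_modulus(Er, Ei=1000.0, nu_i=0.07, nu_s=0.25):
    # Equation (5) solved for Es; nu_s = 0.25 is an assumed sample value.
    return (1.0 - nu_s**2) / (1.0 / Er - (1.0 - nu_i**2) / Ei)

H = hardness(10.0, 0.5)          # 10 mN load (as in the text), 0.5 um^2 area
Er = reduced_modulus(200.0, 0.5) # hypothetical unloading stiffness and area
Es = sample_modulus(Er)          # indenter compliance removed
```

Note that Es exceeds Er, as expected once the indenter's own compliance is subtracted out via Equation (5).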

The nanohardness measurements were carried out by means of a nanoindenter (CSM, Switzerland) equipped with a Berkovich diamond indenter with a tip radius of 30 nm, at a load of 10 mN, on films of ∼0.5 μm thickness deposited on Inconel-783 substrates. While measuring the nanomechanical properties, the indentation depth was kept to less than 1/10th of the film thickness in order to eliminate the substrate effect. The load and depth resolutions of the instrument are 0.04 μN and 0.04 nm, respectively. A Poisson's ratio of 0.07 and an elastic modulus of 1000 GPa were used for the diamond indenter to obtain the hardness and elastic modulus of the multilayers. The resistance to plastic deformation, a qualitative measure of the toughness of the films, was estimated from the empirical relation H³/E² [42].

3. Results

Fig. 1. XRD pattern of a sintered CeO2 target with reflections corresponding to cubic structure.

Fig. 2. XRD pattern of a sintered Gd2O3 target showing monoclinic phase.

The XRD patterns of CeO2 and Gd2O3 films deposited at an oxygen partial pressure of 2 Pa and substrate temperatures of 300 and 873 K are shown in Fig. 3 and Fig. 4, respectively. The CeO2 film deposited at 300 K is polycrystalline with FCC structure, whereas the Gd2O3 film deposited at 300 K tends to be amorphous or X-ray amorphous (Fig. 3(a)). The XRD pattern of the Gd2O3 film deposited at 300 K shows a broad peak corresponding to a crystallite size of ∼1–2 nm, which is below the range of crystallite sizes that can be accurately determined from an XRD pattern. Apart from this broad peak at ∼30°, no other peak is observed in the XRD pattern. Hence it has been concluded that the Gd2O3 film is amorphous when deposited at 300 K. The Gd2O3 and CeO2 films deposited at 873 K are found to be polycrystalline with monoclinic and cubic structure, respectively (Fig. 4(a) and (b)).
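Crystallite-size estimates of the kind quoted above are commonly obtained from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). The sketch below uses Cu-Kα radiation and a hypothetical peak width; as the text notes, sizes of ∼1–2 nm sit at the limit of what the method can resolve.

```python
# Scherrer crystallite-size sketch. The 8-degree peak width is a hypothetical
# illustration of a very broad peak; K = 0.9 is the usual shape factor and
# 0.15406 nm is the Cu-K-alpha wavelength.
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)               # peak width in radians
    return K * wavelength_nm / (beta * math.cos(theta))  # size D in nm

D_broad = scherrer_size(30.0, 8.0)   # very broad peak at ~30 degrees -> ~1 nm
D_sharp = scherrer_size(30.0, 1.0)   # sharper peak -> larger crystallites
```

The inverse relation between width and size is why a single very broad hump at ∼30° is read as (X-ray) amorphous rather than as a reliable 1–2 nm crystallite size.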


From Formula (1), it is evident that the electrical field strength increases as the diameter of the tip decreases, which is also shown in Fig. 7. Thus the electrical field between the electrodes with a tip diameter of 56.4 μm is stronger than that with a diameter of 626 μm.
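Formula (1) itself is not reproduced in this excerpt. As a hedged stand-in, the field at the tip of a coaxial needle-to-cylinder geometry, E = V / (r₀ ln(R/r₀)), shows the same trend: a smaller tip radius concentrates the field. The applied voltage and cylinder radius below are hypothetical values, used only to compare the two tip diameters quoted in the text.

```python
# Illustrative coaxial approximation (NOT the paper's Formula (1)):
# E(tip) = V / (r0 * ln(R / r0)), with r0 the tip radius and R the outer radius.
import math

def tip_field(V, tip_diameter_um, R_um=5000.0):
    r0_um = tip_diameter_um / 2.0
    return V / (r0_um * 1e-6 * math.log(R_um / r0_um))   # field in V/m

# Tip diameters quoted in the text: 56.4 um vs 626 um; V = 3 kV is assumed.
e_small = tip_field(3000.0, 56.4)
e_large = tip_field(3000.0, 626.0)
```

Even under this simplified geometry, the 56.4 μm tip produces a field several times stronger than the 626 μm tip at the same voltage, consistent with the comparison in the text.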

After the negative corona discharge ionization process, the main ions are negative ions, including NO2− and NO3−, as was also confirmed by mass spectrometry experiments [14].

The negative ions generated near the needle (cathode) drift toward the cylinder (anode), while the positive ions stay close to the needle (cathode). In the drift region of the needle-to-cylinder electrode structure, the velocity of the negative ion is as follows:

equation (3): v = kE,

where v is the velocity of the negative ion, k is the ion mobility, and E is the electrical field. When the electrical field is large enough that the velocity of the negative ion increases and the shielding effect of the negative ions disappears, the Trichel pulse becomes fully DC [15]. This may be the reason why the needle with a micro tip can sustain a glow discharge while the needle with a larger tip cannot. Therefore, the micro tip is very important for the realization of glow discharge.
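Equation (3) is a one-line calculation; the mobility value used below is a typical order of magnitude for small negative ions in air (∼2 × 10⁻⁴ m²/(V·s)) and is an illustrative assumption, not a value taken from the paper.

```python
# Drift velocity of a negative ion, Equation (3): v = k * E.
# k = 2e-4 m^2/(V*s) is an assumed, typical small-ion mobility in air.

def drift_velocity(E_V_per_m, k=2.0e-4):
    return k * E_V_per_m   # velocity in m/s

v = drift_velocity(2.0e6)   # for an assumed field of 2 MV/m
```

The linear dependence on E is the point of the argument in the text: a stronger field at the micro tip sweeps the negative ions away faster, removing their shielding effect.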

On the other hand, the small sizes of the gap and the electrode can make neutral molecules leave the volume of the discharge before they get too ‘hot’ [10]. These conditions may be the reason why the glow discharge can be realized and kept stable without external airflow.

3.3. The effect of the cylinder surface on the discharge

Throughout this part of the study, the cylinder was turned around to examine the effect of the cylinder surface on the discharge. Fig. 8 shows that some cylinder surfaces had black spots, while others did not. The black spots were formed during the discharge. As shown in Fig. 5 and Fig. 6, a discharge flame exists on the cylinder surface during discharge, much like roasting the cylinder surface over a 'fire'. After many discharge experiments, black spots were thus formed on the cylinder surface.

Fig. 8. The black spots on the cylinder surface with the cylinder turned around.

Under the same experimental conditions, the discharge was carried out, and the cylinder was turned around to observe the discharge state. From Fig. 9, it can be seen that either glow discharge or arc discharge occurs, depending on the cylinder surface. If there is a black spot on the cylinder surface, glow discharge can be realized, as shown in Fig. 9(a), (c), (d) and (h). On the other hand, arc discharge occurs when there is no black spot on the cylinder surface, as seen in Fig. 9(b), (e), (f) and (g).


Finally, in our study, a positive recommendation was defined as a product being accepted for use with or without restrictions. We did not attempt to separate full acceptance from acceptance for restricted use, because the total number of submissions per level of recommendation would have been too low to draw robust conclusions. In addition, it is unknown whether requesting a reimbursement recommendation with restrictions was the manufacturer's strategic decision, or whether the SMC came to this conclusion given the submitted evidence. Therefore, it cannot be assumed that a restricted acceptance represents a less favorable decision, because the restriction may serve, for example, the therapeutic indication of the product of interest. Nonetheless, it is acknowledged that future research could take into account the multinomial nature of the recommendation outcome.

The SMC database created for this analysis is assessed to be comprehensive, including information from all components of the submitted evidence. In addition, the sample for this analysis is the largest that has been created for this type of analysis, and it is the only one related to SMC coverage recommendations. Future research could apply a consistent assessment of cases, explanatory variables, and methodologies across countries to better understand and explain differences across health care systems.

To conclude, the present study identified the factors most influential in the reimbursement recommendations of the SMC. It was shown that a favorable ICER (i.e., a base-case ICER, and sensitivity analyses around it, below £30,000/QALY gained) is crucial for a successful submission. Both univariate and multivariate analyses showed that this acts in combination with the clinical evidence, the target disease, and the company's size, which also play a significant role in SMC reimbursement decisions. It is interesting to observe that these conclusions are in line with the publicly stated objective of the SMC: "Will the medicine be effective? Are current treatments better? Does the medicine give value for money compared to existing treatments? These are the main questions asked while considering new medicines' approval" [15].
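The threshold test described above reduces to simple arithmetic; the cost and QALY figures below are hypothetical illustrations, with only the £30,000/QALY threshold taken from the text.

```python
# ICER check against the 30,000 GBP/QALY threshold cited in the text.
# delta_cost and delta_qalys below are hypothetical submission figures.

THRESHOLD_GBP_PER_QALY = 30_000

def icer(delta_cost, delta_qalys):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained.
    return delta_cost / delta_qalys

def favorable(delta_cost, delta_qalys, threshold=THRESHOLD_GBP_PER_QALY):
    return icer(delta_cost, delta_qalys) < threshold

ok = favorable(12_000, 0.5)       # ICER = 24,000 GBP/QALY, below threshold
too_high = favorable(25_000, 0.5) # ICER = 50,000 GBP/QALY, above threshold
```

Of course, the study's point is that the base-case ICER is necessary but not sufficient: clinical evidence, target disease, and company size also shape the recommendation.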

clinical trials; minimal clinically important difference; quality of life; sample size; urinary incontinence


Sample size calculations for clinical trials aim to ensure that each trial will have sufficient statistical power to detect a predetermined difference in the outcome of interest between the new intervention and the comparator [1]. Participants in each trial arm usually achieve some degree of clinically important change, rendering it necessary to quantify the magnitude of effect due to the intervention alone. A minimal clinically important difference (MCID) in survival or symptoms is therefore defined in advance for each participant, and rates of attaining this MCID are compared between groups in the final analysis [1]; [2]; [3]. The MCID has been well established as the principal way of giving clinical relevance to changes in standardized measures, representing the smallest difference in a score in the domain of interest that patients perceive as beneficial in the management of their condition [3]. A critical component of sample size calculations for health-related quality-of-life (HRQOL) outcomes is anticipating the likelihood that participants in each trial arm will reach their desired MCID in HRQOL.
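Because the final analysis compares the rates of attaining the MCID between arms, the corresponding sample-size calculation is a standard two-proportion formula. The sketch below uses the normal approximation; the anticipated attainment rates p1 and p2 are hypothetical planning assumptions, not values from the text.

```python
# Two-proportion sample size per arm (normal approximation):
# n = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2
# p1, p2 = anticipated MCID-attainment rates in the two arms (assumed values).
import math

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    # Defaults: two-sided alpha = 0.05, power = 0.80.
    var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    n = (z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2
    return math.ceil(n)

n_small_effect = n_per_arm(0.40, 0.55)  # modest difference in attainment rates
n_large_effect = n_per_arm(0.30, 0.60)  # larger difference needs fewer patients
```

This is exactly why misjudging the likely MCID-attainment rate, for instance by planning with a generic instrument that attenuates the effect, can leave a trial underpowered.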

The National Institute for Health and Care Excellence recommends using a generic preference-based instrument to assess gains in trial-based quality of life, to quantify an impact on HRQOL that can be converted into quality-adjusted life-years (QALYs) for evaluation of cost effectiveness and comparison across the spectrum of disease [4]. Rates of attaining the MCID for a specific disease, as defined and measured on a disease-specific HRQOL measure, are often attenuated when generic HRQOL instruments are used to assess improvement, yielding vastly different effect sizes by type of instrument and disease category [5]. Health conditions that preferentially affect domains of mental health over physical health comprise a disease category for which effect sizes are more difficult to measure on generic HRQOL instruments [6]. Instruments such as the three-level EuroQol five-dimensional questionnaire (EQ-5D-3L) and the short-form 36 health survey (SF-36) miss important aspects of the impact mental health issues can have on the quality of people's lives [6]. Mental health effects on self-perception, ill-being, and activity are insufficiently detected by the EQ-5D-3L and the SF-36 for persons with anxiety disorders, bipolar disorders, and schizophrenia [6]. Similar shortcomings of generic preference-based HRQOL measures apply to persons suffering from urinary incontinence [7]; [8]; [9]. The quality-of-life domains affected by the experience of involuntary urine loss include social embarrassment, avoidance behavior, psychosocial impact, and sleep, which have limited representation in generic preference-based HRQOL measures [8]; [9]; [10]; [11]. In individuals with mild symptoms of incontinence, the EQ-5D-3L demonstrates a pronounced ceiling effect and skewed response distribution and has been shown to be relatively insensitive to changes in incontinence severity [7]; [8].


Like Hill et al., we found a large number of problems with the clinical evidence in submissions to the PBAC. The overall rate of significant problems in major submissions to the PBAC does not appear to have changed since the mid-1990s. We observed more problems with the interpretation of the clinical evidence and fewer problems with the determination of therapeutic noninferiority than did Hill et al. This could be explained by a higher proportion of submissions for medicines with a claim of clinical superiority in our study; because Hill et al. did not present data to enable such a comparison, we are unable to determine this.

The findings from our study have international implications, because it is likely that HTA agencies in other countries, such as the National Institute for Health and Care Excellence, have encountered similar types of problems in their assessment of the available clinical evidence for reimbursement/coverage determinations. A common denominator of the different publicly and privately funded health care technology reimbursement systems is the strength and relevance of comparative "clinical evidence." This holds true irrespective of whether the agencies require the evidence to be presented to them in submissions from the developers of the medicines concerned, because the issue is more about the underlying clinical evidence than about any failure of the developers to identify and present it.

We are not aware of similar research being conducted in other jurisdictions, so we cannot determine whether our findings of the clinical evidence being a poor fit for purpose are confined to Australia. We note the recent studies by Kaltenthaler et al. [9]; [10] on the identification of issues associated with the first 95 single technology assessments undertaken by the National Institute for Health and Care Excellence, and a review of the evidence used to inform the development of cost-effectiveness models within HTAs. In neither study did they undertake an assessment of the quality of the supporting clinical evidence.

Problems with the choice of comparator are not unique to the PBAC; they have occurred with submissions to HTA agencies in other jurisdictions such as the Institute for Quality and Efficiency in Health Care in Germany and the Canadian Drug Expert Committee in Canada [11]; [12] ;  [13]. It is unclear as to whether choice of comparator issues are more or less common in Australia.

In the absence of empirical evidence on the quality of clinical evidence considered by other HTA agencies, the extent of problems in other jurisdictions is unknown.

The evaluation framework we developed could be used to conduct such research, which could be performed by "independent" researchers if there is sufficient information on the assessments and determinations in the public domain. Should such research be conducted and derive similar findings to ours, it will raise important issues regarding what can and should be done by all stakeholders to improve the quality of the clinical evidence used to support the reimbursement/coverage of new medicines.

Acknowledgments

We thank Sue Hill and Robyn Ward for their helpful comments on reviewing the manuscript.

Source of financial support: The authors have no financial relationships to disclose.

Belgium; coverage decisions; patient participation; policy; public involvement


Involvement of the general public and patients in the resource allocation decision-making process is a way to incorporate societal values in decisions. Involvement is “the spectrum of processes and activities that bring the public into the decision-making process” and is associated with activities beyond routine democratic processes [1]. Public and patient involvement (PPI) in health care decision making helps in legitimating decisions [2]; [3]; [4] and in dealing with societal and economic developments, such as increasing demand for health care and higher patient expectations in a context of budgetary constraints [5]. Moreover, it could engender trust and confidence in the health system [6] and engage communities and individuals in health action [3]; [7].

Identification of Average Treatment Effects

We start our discussion on model identification by defining the following treatment and control groups.

D1. Treated group of smokers who quit smoking (TGQ): individuals who were smokers in 2004 and became nonsmokers in 2006.

D2. Control group of smokers (CGS): individuals who were smokers in 2004 and remained so in 2006.

In our model, quitters (i.e., treated individuals) were defined by a question asked in the BHPS about smoking status. The specific question is posed as “Do you smoke cigarettes?” and was recorded as a dichotomous variable. (The question posed in this way represents a limitation of our study, because it does not provide information about the frequency of smoking. Smoking and quitting smoking are chaotic, unstable processes; many attempts at quitting, for example, are unplanned or spontaneous, and many fail almost immediately, as suggested by West [22].) The BHPS also contains another question measuring smoking habits, which records how many cigarettes the respondent usually smokes per day. In this article, we used only the first question, about smoking participation, because we were more interested in the effects of quitting than of smoking reduction. So as not to include subjects with unusual smoking behaviors in the treatment or control groups, we used BHPS waves from 1999 to 2003 to reconstruct individual smoking histories. We then excluded from the sample those individuals who stated they were nonsmokers between 2004 and 2006 but who had been smokers between 1999 and 2003, as well as those who stated they were smokers between 2004 and 2006 but who had been nonsmokers between 1999 and 2003.
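The group construction described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical long-format panel with columns `id`, `year`, and `smoker` (1 = smoker, 0 = nonsmoker), and screens on the 1999–2003 waves before assigning the 2004/2006 transitions.

```python
import pandas as pd

def build_groups(df):
    """Split a long-format smoking panel into the groups defined above.

    df is assumed (hypothetically) to have one row per person per wave,
    with columns 'id', 'year', and 'smoker' (1 = smoker, 0 = nonsmoker).
    """
    wide = df.pivot(index="id", columns="year", values="smoker")
    history = wide.loc[:, 1999:2003]             # pre-period screening waves
    always_smoked = history.eq(1).all(axis=1)    # consistent smoker history
    never_smoked = history.eq(0).all(axis=1)     # consistent nonsmoker history

    # D1 (TGQ): smokers in 2004 who became nonsmokers in 2006
    tgq = wide.index[(wide[2004] == 1) & (wide[2006] == 0) & always_smoked]
    # D2 (CGS): smokers in 2004 who remained smokers in 2006
    cgs = wide.index[(wide[2004] == 1) & (wide[2006] == 1) & always_smoked]
    # Stable nonsmokers throughout (used for the CGNS control group)
    cgns = wide.index[(wide[2004] == 0) & (wide[2006] == 0) & never_smoked]
    return tgq, cgs, cgns
```

Individuals whose 1999–2003 history contradicts their 2004–2006 status fall into none of the three index sets, which mirrors the exclusion step described in the text.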

Our evaluation strategy assumes that weight variations between 2004 and 2006 for TGQ individuals were affected by quitting smoking and by a spontaneous dynamic (i.e., time-specific component), whereas individuals who continued to smoke (CGS) were affected only by the spontaneous dynamic.
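This identification strategy is, in effect, a difference-in-differences comparison of weight changes: subtracting the control group's mean change removes the common spontaneous dynamic. A toy numerical sketch (all magnitudes invented for illustration, not estimates from the BHPS data):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
trend = 0.5        # common spontaneous weight change, kg (assumed)
quit_effect = 2.0  # "true" effect of quitting on weight, kg (assumed)

# Weight change 2004-2006: both groups share the time-specific component,
# but only the treated group (TGQ) also receives the quitting effect.
dw_cgs = trend + rng.normal(0, 1, n)
dw_tgq = trend + quit_effect + rng.normal(0, 1, n)

# Differencing out the common trend recovers the ATE of quitting.
ate_hat = dw_tgq.mean() - dw_cgs.mean()
print(round(ate_hat, 2))
```

With a large sample, `ate_hat` lands near the assumed effect of 2.0 kg even though both groups gained weight over time.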

According to Cawley et al. [23], however, smoking habits are influenced by body weight if smokers do not quit because they are afraid of putting on weight. In this case, estimates of the relationship between smoking and body weight are biased by reverse causality. In addition, comparing TGQ with CGS may produce biased estimates because the estimated weight variation for the CGS is biased downward and consequently the estimated ATEs are biased upward.

We evaluated the magnitude of this effect by defining two further control groups. The first added nonsmokers to the smokers’ control group (from this control group we excluded nonsmokers who started smoking in 2006, irrespective of whether they were ex-smokers in 2004; the share of such individuals in our sample [2.04%] is negligible for our estimates), that is, the control group of smokers and nonsmokers (CGALL), whereas the second comprised only nonsmokers, that is, the control group of nonsmokers (CGNS). Both groups were composed of individuals who kept their cigarette consumption stable and who, in principle, were not affected by reverse causality, unlike the CGS, because their weight did not affect their smoking decisions. We formally defined:

D3. CGALL: individuals who were smokers or nonsmokers in 2004 and remained so in 2006.

D4. CGNS: individuals who were nonsmokers in 2004 and remained so in 2006.

Because treatment was not randomly assigned in our data set, treated and control groups also differed according to time-varying unobserved factors related to smoking and weight decisions. In this case, the presence of unobserved heterogeneity may have caused our baseline estimates, comparing treatment groups (i.e., TGQ) with CGS, to be biased downward. In fact, quitters turned out to be generally more concerned about their health and more oriented toward the future, as discussed by McCaul et al. [24], and their decision to quit smoking may thus be seen as part of a more general attitude aimed at improving health—for example, also by reducing weight in obese people. The presence of these individuals in TGQ may have biased downward not only the estimated weight variation but also the estimated ATEs.


Keywords: antiretroviral therapy; coverage; HIV treatment; resource-limited; simulation


The development of highly active antiretroviral therapy (ART) has revolutionized the treatment of HIV disease, producing dramatic increases in survival [1]; [2]; [3]. The benefits of these therapies, however, have not been fully realized in many resource-limited environments. The lack of sufficient treatment has been especially severe in sub-Saharan Africa, where many countries are able to provide treatment to only a small portion of the HIV-infected population [4]. Recent recommendations that support a “test-and-treat” strategy, with treatment being recommended for all HIV-infected individuals regardless of CD4 count, will exacerbate the problem of insufficient treatment resources.

Over the past decade, many sub-Saharan African nations, in cooperation with developed nations, the pharmaceutical industry, the World Health Organization (WHO), and many private charities, have increased the resources available to treat the HIV epidemic. A measure of the success of these efforts is the increase in “coverage”: the proportion of HIV-infected people meeting criteria for treatment who are being treated. In 2003, average coverage in sub-Saharan Africa was only 3%; by 2005 it had increased to 17% [5], still leaving large portions of the population untreated. In just a few years, international efforts have increased coverage rates substantially, and most persons in sub-Saharan Africa now live in countries with between 40% and 60% coverage [4]. The effects of increasing treatment resources on the epidemic are complex: on the one hand, HIV-infected individuals on treatment live substantially longer than those not on therapy; on the other hand, individuals on treatment have a lower viral load (VL) and are less likely to transmit the disease. Treatment can also induce mutations, which may decrease its effectiveness and increase the individual's VL. Therefore, the standard Joint United Nations Programme on HIV and AIDS (UNAIDS) “snapshot” definition of coverage, which we name prevalence-based coverage, may fall short in measuring the performance of ART programs. For example, Johnson and Boulle [6] note that as ART programs mature, prevalence-based coverage becomes less sensitive to annual changes in ART enrolment and consequently says relatively little about recent performance. Moreover, prevalence-based coverage is very sensitive to the treatment eligibility criteria and will decline if the current recommendations for treating at a CD4 count of less than 500 cells/mm3 are used to determine the treatment-eligible population [7]. Johnson and Boulle [6] also propose the “enrolment ratio,” the ratio of ART initiation to HIV disease progression, as an alternative measure to complement prevalence-based coverage.
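The two measures discussed above reduce to simple ratios. The sketch below is illustrative only: the function names are ours, and all figures are invented rather than taken from the cited studies.

```python
def prevalence_based_coverage(on_art, eligible):
    """UNAIDS-style snapshot: fraction of treatment-eligible people on ART."""
    return on_art / eligible

def enrolment_ratio(art_initiations, new_eligible):
    """Flow measure: ART initiations relative to HIV disease progression
    (new treatment-eligible cases) over the same period."""
    return art_initiations / new_eligible

# Invented numbers: a mature programme can show a solid snapshot coverage
# while its recent enrolment still lags behind disease progression.
print(prevalence_based_coverage(on_art=450_000, eligible=900_000))    # 0.5
print(enrolment_ratio(art_initiations=60_000, new_eligible=100_000))  # 0.6
```

The contrast illustrates why a flow-based measure can complement the snapshot: the stock ratio changes slowly in a mature programme, while the flow ratio reacts to each year's enrolment.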

In this study, we propose a new definition for coverage, which we name cumulative incidence–based coverage, and show that it may be a better representation of the long-run performance of ART programs than is the conventional prevalence-based coverage. In particular, unlike the prevalence-based coverage, which improves when HIV-infected individuals not on treatment die, the cumulative incidence–based coverage is less sensitive to the rates of mortality, to CD4 count decline in untreated individuals, and to ART eligibility criteria. To compare the estimates of the prevalence-based and cumulative incidence–based coverage in a resource-limited setting, in which the effects of ART expansion on the size of the HIV-infected population who qualify for treatment are complex, we extend an individual HIV progression model and incorporate viral transmission. We also investigate the effects of various coverage and eligibility decisions on the HIV-infected population and required ART resources.
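The sensitivity to untreated mortality can be shown with a toy calculation. For illustration only, we assume the cumulative measure divides everyone ever treated by everyone ever eligible, while the snapshot measure divides by eligible people still alive; the paper's exact operational definitions may differ.

```python
def coverage_measures(ever_treated, ever_eligible, untreated_deaths):
    """Contrast snapshot vs. cumulative coverage under untreated mortality.

    Hypothetical definitions, assumed for this sketch:
      prevalence-based:           treated / eligible people still alive
      cumulative incidence-based: ever treated / ever eligible
    """
    alive_eligible = ever_eligible - untreated_deaths
    prevalence_based = ever_treated / alive_eligible
    cumulative = ever_treated / ever_eligible
    return prevalence_based, cumulative

# Same programme, more deaths among the untreated: the snapshot measure
# "improves" while the cumulative measure is unchanged.
print(coverage_measures(400, 1000, untreated_deaths=0))    # (0.4, 0.4)
print(coverage_measures(400, 1000, untreated_deaths=200))  # (0.5, 0.4)
```

Under these assumed definitions, deaths among untreated individuals shrink only the snapshot denominator, which is exactly the perverse behavior the cumulative measure is meant to avoid.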