

The third example, from Site PTPIC0009 (Montanha do Pico, Prainha e Caveiro), in Pico Island (Azores) (see Supplementary Data, file PTPIC0009.kmz), illustrates a different situation with an intermediate number of habitat types. The major contributions to the SRI value of this Site are from areas of habitat types 4050 (*Endemic macaronesian heaths), 6180 (Macaronesian mesophile grasslands) and 9360 (*Macaronesian laurel forests – Laurus, Ocotea) which have high values of Rarity but low Representativeness. Due to this fact the Relevance Index of this Site is lower than those of the two previous examples.
These three examples, replicable in any Natura 2000 Site in Europe, demonstrate how the proposed SRI evaluation assists in identifying the most valuable areas in each Site. Equivalent approaches may also be applied at the level of regions, countries or bioregions, and to individual habitat types or groups of habitat types.
In the supplementary file (Supplementary Data, Natura2000_Portugal_zij.kmz) we present the Relevance Indices for all Natura 2000 Sites in Portuguese territory.
3.3. Relevance Indices applicability
A Relevance Index for a habitat area should include the rarity of that habitat type but also its representativeness within the whole Natura 2000 Network. In fact, larger areas of a given habitat type contribute more to the conservation of the whole habitat type than smaller ones, no matter their rarity or commonness (Meffe and Carroll, 1997, Rosenzweig, 1995 and Battisti and Fanelli, 2015). Also, as pointed out by MacArthur and Wilson (1967), species diversity strongly correlates with the size of habitat patches. Therefore, different areas of a given habitat type may have different conservation relevance, since they represent different proportions of the total extent of that habitat type in the Natura 2000 Network. Obviously this is a first-level approach: for instance, Martins et al. (2014), discussing species-area models in the Iberian Peninsula, concluded that integrating land-use variables into models of species richness response was significant, thus enabling multi-habitat species-area modeling.
The representativeness of a given area for the whole habitat type is equivalent to the statistical representativeness of a sample in a population. In a simple random sampling scheme each member of the statistical population has an equal probability of being selected (Cochran, 1977 and Roleček et al., 2007). This was the basis for the concept of representativeness used in this study. Of course this simplification implies that all hectares of the same habitat type are weighted equally, which we know is not strictly true due to genetic variation and several other factors that cannot be integrated in a general analysis at this scale.
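To make the sampling analogy concrete, the sketch below computes the representativeness of a single habitat patch as its share of the habitat type's total extent in the network, together with an illustrative rarity score based on that total extent. This is only a minimal illustration of the two ingredients discussed here; the function names, the rarity formula and the figures are assumptions, not the SRI formula used in the paper.

```python
# Minimal sketch of the two ingredients discussed above (illustrative only;
# not the SRI formula used in the paper).

def representativeness(patch_area_ha: float, network_area_ha: float) -> float:
    """Share of the habitat type's total Natura 2000 extent held by this patch,
    analogous to the sampling fraction of a population."""
    return patch_area_ha / network_area_ha

def rarity(network_area_ha: float, max_network_area_ha: float) -> float:
    """Illustrative rarity score: habitat types with a small total extent
    relative to the most widespread type score closer to 1."""
    return 1.0 - network_area_ha / max_network_area_ha

# Example: a 50 ha patch of a habitat type covering 2,000 ha network-wide,
# where the most widespread habitat type covers 500,000 ha.
print(representativeness(50.0, 2_000.0))   # 0.025
print(rarity(2_000.0, 500_000.0))          # 0.996
```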
The similarities of the proposed Relevance Indices to other approaches using the concepts of rarity and diversity in ecology and conservation (e.g. Samu et al., 2008, Geneletti, 2003 and Lennon et al., 2003) indicate that our approach has the properties already identified as adequate for conservation indicators. This is also in line with the work of Haddock et al. (2007), who developed a methodology for evaluating landscape management scenarios in which habitats were prioritized with consideration of their scarcity. In general, conservation prioritization or “priority-setting” usually includes an assessment of extinction risk, but will also integrate other ecological and socioeconomic criteria such as regional responsibility, habitat vulnerability, cultural preferences, likelihood of conservation action success, legal frameworks, and funds availability (Kricsfalusy and Trevisan, 2014).


In addition, CO2 was found to affect the seasonal and interannual variation of instantaneous GPP (Norby et al., 2005). Therefore, we introduced annual mean CO2 mass concentration (ρcyr) as another climatic variable. Because ρcyr was not directly reported at most sites, we calculated it from the CO2 mole fraction (bc) measured at Mauna Loa (Keeling et al., 1976 and Thoning et al., 1989), the CO2 molar mass (Mc, 44 g mol⁻¹), and the molar volume of air at the current state (V1) as:
equation(1)
ρcyr = bc × Mc / V1
where V1 can be calculated from the ideal gas state equation as:
equation(2)
V1 = R × T1 / P1
where R is the universal gas constant and T1 is the air temperature in K (MAT + 273.15).
According to the pressure-height formula, we calculated P1 from altitude (Alt, with the unit of m) and MAT (with the unit of °C) as:
equation(3)
In addition, if a site had multiyear observations, we used the mean GPPyr and climatic variables over the measuring period, so as to exclude the effect of inter-annual variation.
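As a worked illustration of equations (1)-(3), the sketch below converts a CO2 mole fraction to a mass concentration using the ideal gas law for V1 and an isothermal barometric formula for P1. The barometric form, the constants and the example values are assumptions for illustration; the paper's exact pressure-height expression may differ.

```python
import math

R = 8.314        # universal gas constant, J mol^-1 K^-1
M_AIR = 0.02896  # molar mass of dry air, kg mol^-1
G = 9.81         # gravitational acceleration, m s^-2
P0 = 101325.0    # sea-level standard pressure, Pa
M_C = 44.0       # CO2 molar mass, g mol^-1

def co2_mass_concentration(bc_ppm: float, alt_m: float, mat_degc: float) -> float:
    """Annual mean CO2 mass concentration (g m^-3) from the CO2 mole fraction.

    bc_ppm   : CO2 mole fraction (ppm), e.g. from Mauna Loa records
    alt_m    : site altitude (m)
    mat_degc : mean annual temperature (deg C)
    """
    t1 = mat_degc + 273.15
    # Isothermal barometric formula for site pressure (an assumed form of
    # the pressure-height relation; the paper's expression may differ).
    p1 = P0 * math.exp(-G * M_AIR * alt_m / (R * t1))
    # Molar volume of air at site conditions via the ideal gas law (m^3 mol^-1).
    v1 = R * t1 / p1
    # rho_cyr = bc * Mc / V1, with bc converted from ppm to mol mol^-1.
    return bc_ppm * 1e-6 * M_C / v1

# Example: ~380 ppm CO2 at 1000 m altitude and 10 deg C gives ~0.64 g m^-3.
print(round(co2_mass_concentration(380.0, 1000.0, 10.0), 3), "g m^-3")
```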
2.3. Leaf area index data processing
At each site, we extracted LAI data with 8-day temporal resolution from the global land surface satellite dataset (Liang et al., 2013) and calculated the annual mean LAI (LAIyr) for the year that GPPyr was observed as:
equation(4)
LAIyr = (1/46) × ∑_{i=1}^{46} LAIi
where LAIi is the i-th 8-day LAI value.
If the site had multiyear observations, we also used the mean LAIyr for the measuring period to represent its biotic factor.
2.4. RUE calculation
According to the radiation use efficiency theory, GPPyr is the product of RUE, FPARyr, and PARyr. FPARyr can be calculated from LAIyr based on the Beer–Lambert law as:
equation(5)
FPARyr = 1 − e^(−k × LAIyr)
where k is the extinction coefficient, which is set to 0.5 according to Yuan et al. (2010). Therefore, RUE (gC MJ⁻¹) was calculated as
equation(6)
RUE = GPPyr / (FPARyr × PARyr)
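A minimal sketch of equations (4)-(6) follows, assuming GPPyr in gC m⁻² yr⁻¹ and PARyr in MJ m⁻² yr⁻¹; the example numbers are placeholders, not site data.

```python
import numpy as np

K = 0.5  # extinction coefficient, after Yuan et al. (2010)

def annual_mean_lai(lai_8day: np.ndarray) -> float:
    """Eq. (4): mean of the 46 8-day LAI values for the year."""
    assert lai_8day.size == 46
    return float(lai_8day.mean())

def fpar_from_lai(lai_yr: float, k: float = K) -> float:
    """Eq. (5): Beer-Lambert law, FPARyr = 1 - exp(-k * LAIyr)."""
    return 1.0 - float(np.exp(-k * lai_yr))

def rue(gpp_yr: float, lai_yr: float, par_yr: float) -> float:
    """Eq. (6): RUE = GPPyr / (FPARyr * PARyr), in gC MJ^-1 when
    GPPyr is in gC m^-2 yr^-1 and PARyr in MJ m^-2 yr^-1."""
    return gpp_yr / (fpar_from_lai(lai_yr) * par_yr)

# Example with placeholder values: GPPyr = 1500 gC m^-2 yr^-1,
# LAIyr = 3.0, PARyr = 2500 MJ m^-2 yr^-1 gives RUE of about 0.77 gC MJ^-1.
print(round(rue(1500.0, 3.0, 2500.0), 2), "gC MJ^-1")
```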
2.5. Statistical analyses
Using Matlab 7.7 (MathWorks, Inc., Natick, MA, USA), we applied linear regression to separately analyze the effects of MAT, MAP, PARyr, ρcyr, and LAIyr on the spatial variations of RUE and FPARyr. Based on the significant factors, stepwise regression was used to build a multivariable model. Path analysis was then applied to distinguish the factors directly affecting the spatial variation of RUE.
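The single-factor regressions can be reproduced along the following lines; this sketch uses scipy rather than the Matlab routines actually employed, and the data arrays are placeholders rather than the site data.

```python
import numpy as np
from scipy import stats

def single_factor_fit(x: np.ndarray, y: np.ndarray):
    """Ordinary least-squares fit of one climatic factor against RUE,
    returning the R^2 and RMSE reported in the Results."""
    res = stats.linregress(x, y)
    y_hat = res.intercept + res.slope * x
    r2 = res.rvalue ** 2
    rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
    return r2, rmse

# Placeholder data: RUE (gC MJ^-1) versus MAT (deg C) across 50 sites.
rng = np.random.default_rng(0)
mat = rng.uniform(-5, 25, 50)
rue_vals = 0.05 * mat + 0.8 + rng.normal(0, 0.5, 50)
print(single_factor_fit(mat, rue_vals))
```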
3. Results
3.1. Factors affecting the spatial variation of RUE
Many factors were found to significantly affect the spatial variation of RUE, but their effects differed markedly (Fig. 2). MAT, whose increase significantly raised RUE, exerted the strongest effect on the spatial variation of RUE, with an R2 of 0.35 and an RMSE of 0.69 gC MJ⁻¹ (Fig. 2a), while MAP played the weakest role, explaining only 10% of the spatial variation of RUE (Fig. 2b). PARyr exhibited a negative effect on the spatial variation of RUE, with an R2 of 0.20 and an RMSE of 0.77 gC MJ⁻¹ (Fig. 2c), whereas increasing ρcyr significantly promoted RUE, with an R2 of 0.23 and an RMSE of 0.75 gC MJ⁻¹ (Fig. 2d). However, there was no significant correlation between LAIyr and RUE (data not shown).


While there is a wide spectrum of organic and inorganic contaminants in stormwater, contamination with pathogenic microorganisms such as viruses and bacteria poses the greatest risk to human health. Faecal waste from humans and animals potentially contains high concentrations of pathogenic microorganisms. A variety of indicators have been used to assess faecal contamination of water, including Escherichia coli (E. coli), faecal coliforms, and enterococci (WHO, 2001 and NHMRC, 2011).
Mechanisms that influence bacteria transport and removal in aquifers include attachment to and detachment from the sediments, straining, and inactivation in the aqueous and solid phases (Bradford et al., 2013). Changes in the flow rate and in aqueous (pH, ionic strength and composition) and solid phase chemistry are known to strongly influence these transport and removal processes (Bradford et al., 2015). Although most studies investigating the fate of bacteria in porous media have been performed in the laboratory, advances in the understanding of bacteria transport and removal would clearly also benefit from field investigations. There are many factors, in addition to the physicochemical characteristics of the aquifer, which control the removal and transport of microorganisms in groundwater. For bacteria, these biotic and abiotic factors include growth, predation by protists, possible infection by bacteriophage, motility, lysis under unfavourable conditions, changes in cell size and propensity for attachment to solid surfaces in response to alterations in nutrient conditions, and reversible and irreversible attachment to solid surfaces (Bradford et al., 2014). Many aspects of bacteria removal are best investigated in the field, since packed-column experiments in the laboratory cannot reproduce the actual structure and heterogeneities of aquifer sediments. Several field studies have reported the potential removal of organic chemicals (Pavelic et al., 2006a, Ying et al., 2008 and Ernst et al., 2012), nutrients (Vanderzalm et al., 2013), and pathogens (Sidhu et al., 2010, Toze et al., 2010 and Tandoi et al., 2012) during ASR storage. A review of water quality improvement processes for stormwater and recycled water in aquifers is given in Dillon and Toze (2005) and Vanderzalm et al. (2006).
The fate and removal of bacteria, and the factors influencing their presence in groundwater, are important issues to be considered when reusing harvested stormwater. The main objective of this study was to assess the efficacy of ASR for removing E. coli and to demonstrate that aquifer storage can offer considerable protection for public health and groundwater quality. Urban stormwater quality was examined during passage through four different ASR systems after wetland treatment in order to understand and characterise the changes in water quality and its variability, particularly for E. coli and turbidity.
2. Materials and methods


An Arrhenius analysis was performed on the soil dissolution data as described by Mercadé-Prieto and Chen (2006). This gave an activation energy (Ea) of approximately 4.8 kJ/mol, slightly lower than, but of the same order of magnitude as, that for non-catalysed breakage of peptide bonds (8-10 kJ/mol) (Martin, 1998). This supports the hypothesis, stated above, of a reaction-rate-limiting stage. Although only three points were analysed (three temperatures considered), a linear correlation was observed when plotting removal rates (ln SD) against inverse temperature (1/T). The estimated coefficient of determination was high (R2 = 0.999). This suggests that the rate-limiting stage is the same regardless of temperature. A deeper analysis of this phenomenon would require studying the dissolution behaviour at different pH values, enzyme levels or sample processing conditions.
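A minimal sketch of the Arrhenius fit described here: ln(removal rate) is regressed against 1/T and Ea is recovered from the slope. The three rate values are placeholders, not the measured soil dissolution data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_ea(temps_c, rates):
    """Fit ln(rate) = ln(A) - Ea/(R*T) and return Ea in kJ/mol."""
    t_k = np.asarray(temps_c, dtype=float) + 273.15
    slope, _intercept = np.polyfit(1.0 / t_k, np.log(rates), 1)
    return -slope * R / 1000.0

# Placeholder removal rates at three temperatures (not the paper's data).
print(round(arrhenius_ea([30.0, 40.0, 50.0], [0.0100, 0.0108, 0.0115]), 2), "kJ/mol")
```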
The empirical approach followed in this section was used to calculate removal rates. Different cleaning mechanisms can be modelled statistically as a function of the parameters controlled. The examples given have shown the effect of temperature and of the frequency of application of shear stress. Two phases of the cleaning process are distinguished: an initial stage with no removal, defined by a lag time, and a subsequent constant removal phase. To compensate for the sudden increase in removal rates that this approach would produce, a transition period was also defined: a linear increase in the removal rate was imposed after the initial lag time, lasting an extra half of the estimated lag time. This transition period provides smoother simulated curves around the curve maximum. There is scope for further development: applying a stronger theoretical background to characterise the enzyme kinetics (e.g. a Michaelis-Menten approach) would yield a less empirical removal model.
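The two-phase removal scheme (lag time, linear transition over an extra half lag time, then a constant rate) can be sketched as follows; the parameter values are illustrative, not fitted values from this work.

```python
def removal_rate(t: float, lag: float, rate_max: float) -> float:
    """Piecewise removal rate: zero during the lag time, a linear ramp over
    an extra half lag time, then a constant removal rate.
    (A sketch of the empirical scheme described in the text; the fitted
    parameter values from this work are not reproduced here.)"""
    if t <= lag:
        return 0.0                                  # initial stage: no removal
    if t <= 1.5 * lag:
        return rate_max * (t - lag) / (0.5 * lag)   # transition ramp
    return rate_max                                 # constant removal phase

# Example: 10 min lag, maximum removal rate of 2 (arbitrary units).
for t in (5.0, 10.0, 12.5, 15.0, 30.0):
    print(t, removal_rate(t, lag=10.0, rate_max=2.0))
```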
Concerns about a change over time (decrease) of the N parameter (the effective number of polymer chains per unit volume of the polymer) can also arise when analysing the definition of the parameter in detail. Hydrolysis reactions breaking protein network bonds reduce the number of crosslinks and long chains; therefore, the value of N decreases. However, removal occurs from the top layers downward, and the experimental data suggest an enzyme-reaction-rate-limiting stage. This indicates that only the top layers are affected by the enzymatic action at a given time. Those layers are removed as soon as an external mechanical action is applied or through a dissolution process. Deeper layers remain unreacted and therefore keep the N value initially established. Thus, the integrated swelling process can still be applied without any changes over time. For further developments, the decrease of N in the top layers could be used as a limit criterion to establish the point from which removal will occur.
A computer routine developed in MATLAB™ allowed the integration of the different mechanisms studied. A schematic of the algorithm used is shown in Fig. 9.


Various types of hydrophilic non-porous magnetic microspheres with different contents of carboxyl groups were used for bacterial DNA isolation from real food samples (dairy products). According to previously published results (Rittich et al., 2009), the highest DNA yield was achieved using 16% PEG 6000 and 2.0 M NaCl. In real samples, target DNA is usually present at lower concentrations. Randomly coiled DNA changes into compact globules in diluted DNA solutions containing PEG and NaCl (when the PEG and/or salt concentrations exceed their critical values); on the other hand, DNA molecules aggregate in concentrated DNA solutions (Kojima et al., 2006). Therefore, DNA isolation from real samples could proceed under conditions different from those in model experiments.
Spectrophotometrically determined concentrations of DNA isolated from different dairy products using the magnetic microspheres tested are summarised in Table 3. The highest DNA amount was isolated using P(HEMA-co-GMA) (A) microspheres with the highest concentration of COOH groups, 2.61 mM g⁻¹ (see Table 1). The differences among the magnetic particles reflect the influence not only of different coverage of the microsphere surfaces by carboxyl groups but also of the different morphology of the microspheres: PGMA microspheres have a tendency to agglomerate in water solutions (Horák et al., 2005). The variability of the DNA amounts isolated by the phenol extraction procedure can be influenced by imperfect phase separation.
In evaluating the DNA amounts isolated by the magnetic microspheres tested, it is necessary to take into account the fact that the concentration of nucleic acids (DNA and RNA) was determined by UV spectrophotometry. RNA is adsorbed more strongly on the microspheres' surface and thus its eluted amount is lower (see Table 2). DNA recovery was comparable with previously published results with P(HEMA-co-GMA) microspheres (Trachtová et al., 2012). The amount of eluted DNA was sufficient for PCR amplification of target DNA. The advantage of magnetic microspheres is the possibility of applying them to variously sized volumes of initial samples (Trachtová et al., 2012), and therefore the possibility of optimising the DNA isolation procedure.
Food products represent a matrix of complex composition containing inhibitors that affect the course of PCR. The quality of DNA isolated from food products was tested using real-time PCR according to the described procedure (Trachtová et al., 2011). The standard curve was linear over the interval from 5 pg μL⁻¹ to 50 ng μL⁻¹ DNA (R2 = 0.9909, M = −3.28), where R2 is the correlation coefficient of the linear regression and M is the slope of the plot of Ct versus the log of nucleic acid concentration. A simple method of processing real-time PCR data is based on Ct values. The Ct value of each sample is proportional to the log of the initial DNA concentration (Higuchi et al., 1993). The results for two dairy products are given in Table 4. From these results it follows that DNA isolated using the phenol extraction procedure exhibited lower Ct values than DNA isolated by the magnetic particles tested. Therefore, DNA isolated by the magnetic particles tested contained lower amounts of PCR inhibitors than DNA isolated by the phenol extraction procedure. Only one PCR product was amplified in real-time PCR using DNA isolated from all dairy products, as demonstrated by melting curve analysis (results not shown). It is apparent that real-time PCR can be used to study both the presence of PCR inhibitors in the DNA samples and the influence of magnetic particles on the course of PCR.
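For illustration, the slope of a Ct versus log-concentration standard curve and the amplification efficiency it implies can be computed as below; only the reported slope (M = −3.28) comes from the text, the rest is the generic qPCR standard-curve relationship with placeholder dilution points.

```python
import numpy as np

def standard_curve_slope(log10_conc: np.ndarray, ct: np.ndarray) -> float:
    """Slope M of the Ct versus log10(concentration) standard curve."""
    m, _intercept = np.polyfit(log10_conc, ct, 1)
    return float(m)

def amplification_efficiency(m: float) -> float:
    """PCR efficiency implied by the slope (1.0 corresponds to 100%)."""
    return 10.0 ** (-1.0 / m) - 1.0

# Placeholder dilution series spanning the reported linear range
# (5 pg/uL to 50 ng/uL, i.e. log10 concentration from ~0.7 to ~4.7 in pg/uL).
log_conc = np.array([0.7, 1.7, 2.7, 3.7, 4.7])
ct = 35.0 - 3.28 * (log_conc - 0.7)              # synthetic Ct values with slope -3.28
print(standard_curve_slope(log_conc, ct))        # approx. -3.28
print(round(amplification_efficiency(-3.28), 3)) # approx. 1.02, i.e. ~102% efficiency
```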


We now discuss several external evaluation strategies proposed for the co-clustering scenario. We describe the micro-objects transformation and the measures that have been adapted to this approach, namely CE, RNIA, Rand's index, VI and E4SC. Patrikainen and Meilă [25] propose to transform the candidate co-clustering and the gold standard into traditional clusterings in order to apply existing evaluation measures for the latter. They do so by transforming the space (O, F) into the new space O × F, which is composed of pairs of the form (o, f), o ∈ O, f ∈ F, which they call micro-objects.
Definition 1.
Let G̈ = {(Ḡ1, Ĝ1), (Ḡ2, Ĝ2), …, (Ḡt, Ĝt)} be a co-clustering. The micro-objects transformation of G̈ is the clustering G̃ = {G̃1, G̃2, …, G̃t}, where G̃i = Ḡi × Ĝi for every i ∈ {1, …, t}.
Patrikainen and Meilă's evaluation framework consists of applying the micro-objects transformation in combination with the measures Clustering Error (CE), Relative Non-intersecting Area (RNIA), Rand's index and Variation of Information (VI). CE determines the best matching between the candidate clustering and the gold standard and computes the total number of objects shared by every class-cluster pair according to this matching, which is denoted by Dmax. CE is defined as
equation(1)
CE(G, C) = (|U| − Dmax) / |U|
where U = (⋃G∈G G) ∪ (⋃C∈C C). Now, let I = (⋃G∈G G) ∩ (⋃C∈C C). RNIA is defined as
equation(2)
RNIA(G, C) = (|U| − |I|) / |U|
The traditional Rand's index assumes the candidate clustering and the gold standard to be partitions of the object universe and is defined as
equation(3)
Rand(G, C) = (N00 + N11) / N
where N11 is the number of object pairs that co-occur in a cluster of G and co-occur in a class of C, N00 is the number of object pairs that do not co-occur in a cluster of G and do not co-occur in a class of C, and N is the total number of object pairs. Patrikainen and Meilă count N on the universe U = (⋃G∈G G) ∪ (⋃C∈C C) and, to make the candidate clustering and the gold standard partitions of U, they add as many singleton clusters as necessary.
Finally, VI is based on information theory and assesses the amount of information gained and lost when transforming the candidate clustering into the gold standard, as follows:
equation(4)
VI(G, C) = (1/|U|) ∑_{i=1}^{t1} ∑_{j=1}^{t2} |Gi ∩ Cj| log( |Gi| · |Cj| / |Gi ∩ Cj|² )
where t1 = |G| and t2 = |C|. In a manner analogous to that for Rand's index, they transform the candidate clustering and the gold standard into partitions of U by adding as many singleton clusters as necessary.
Example 2 is designed to test the compliance of the evaluation measures with the rag bag (A.3) condition. For the sake of uniformity, in all cases we consider that a co-clustering G̈1 being scored worse than a co-clustering G̈2 by a measure f means that f(G̈1, C̈) < f(G̈2, C̈), i.e. we view the scores yielded by evaluation measures as similarity values. Since CE and RNIA are defined as dissimilarities in the range [0, 1], in both cases we transform the scores into similarity values by setting fsim(G̈, C̈) = 1 − fdissim(G̈, C̈).
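A minimal sketch of the micro-objects transformation (Definition 1) and of RNIA (equation (2)) over micro-objects follows, assuming each co-cluster is given as a pair of an object set and a feature set; CE's optimal matching step and the singleton padding used for Rand's index and VI are omitted.

```python
from itertools import product
from typing import List, Set, Tuple

CoCluster = Tuple[Set[int], Set[int]]  # (set of objects, set of features)

def micro_objects(co_clustering: List[CoCluster]) -> List[Set[Tuple[int, int]]]:
    """Definition 1: each co-cluster (G_bar, G_hat) becomes the set G_bar x G_hat."""
    return [set(product(rows, cols)) for rows, cols in co_clustering]

def rnia(candidate: List[CoCluster], gold: List[CoCluster]) -> float:
    """Eq. (2): RNIA(G, C) = (|U| - |I|) / |U| over micro-objects."""
    g_area = set().union(*micro_objects(candidate))
    c_area = set().union(*micro_objects(gold))
    u, i = g_area | c_area, g_area & c_area
    return (len(u) - len(i)) / len(u)

# Tiny example: the candidate covers a slightly shifted block of the gold standard.
gold = [({0, 1}, {0, 1})]   # 2x2 block of micro-objects
cand = [({0, 1}, {1, 2})]   # overlaps the gold standard in 2 of its 4 cells
print(rnia(cand, gold))     # (6 - 2) / 6 = 0.666...
```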


Only the correct RT was significantly negatively correlated with fixation proportion on the nose (r = −.322, p = .043) of the 100% Chinese faces during recognition of the target faces, but not with that of the 100% Caucasian faces (r = −.084, p = .605). Thus, the more the participants fixated on the nose of the 100% Chinese target faces during the test period, the faster they correctly recognized the target face during the test. There were no other significant correlations (all other ps > .05). The results are presented in Fig. 5 (also see Table 1).
Fig. 5. Correlations between nose scanning and face recognition performance (correct RT) when recognizing the target faces (each data point refers to an observer's result).
3.2.3. Fixation proportion on the eyes, nose, and mouth
3.2.4. Correlation between correct RTs and fixation proportions on the AOIs
We also conducted a series of Pearson correlation analyses between the correct RTs and fixation proportions on each AOI of the 50% Chinese-50% Caucasian faces during the encoding and test periods.
Only the correct RT was significantly negatively correlated with fixation proportion on the nose of 50% Chinese-50% Caucasian faces in the Chinese condition (r = −.328, p = .039) during recognition of the target faces, but not with that of the nose of the same faces in the Caucasian condition (r = −.202, p = .211). Thus, the more the participants fixated on the nose of the 50% Chinese-50% Caucasian target faces in the Chinese condition during the test period, the faster they correctly recognized these faces during the test. There were no other significant correlations (all other ps > .05). The results are presented in Fig. 5 (also see Table 1).
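The correlation analyses reported here are plain Pearson correlations between correct RTs and AOI fixation proportions; a minimal sketch with placeholder data (not the study's observers) is shown below.

```python
import numpy as np
from scipy import stats

def rt_fixation_correlation(correct_rt: np.ndarray, aoi_fixation_prop: np.ndarray):
    """Pearson correlation between correct RTs and fixation proportion on an AOI."""
    r, p = stats.pearsonr(correct_rt, aoi_fixation_prop)
    return r, p

# Placeholder data for 40 observers (not the study's data): faster RTs are
# simulated for observers who fixate the nose more.
rng = np.random.default_rng(1)
fix = rng.uniform(0.1, 0.6, 40)
rt = 1200 - 600 * fix + rng.normal(0, 120, 40)
print(rt_fixation_correlation(rt, fix))
```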
3.2.6. Scanning differences between 50% and 100% faces in the Caucasian condition
4. Discussion
Consistent with Fu et al., 2012 and Hu et al., 2014, we replicated the results that Chinese participants spent a significantly greater proportion of fixation time on the eyes of other-race Caucasian faces than on the eyes of own-race Chinese faces. In contrast, they spent a significantly greater proportion of fixation time on the nose of Chinese faces than on the nose of Caucasian faces. Furthermore, we found that race-specific face scanning was related to face recognition performance: when recognizing the true Chinese target faces, the greater the fixation proportion on the nose of the Chinese faces, the faster the faces were correctly recognized.
It should be noted, however, that the benefit of specific scanning patterns might be limited to recognition of faces from the more familiar own-race category. This conclusion is supported by the following evidence. First, although participants scanned the eyes of true Caucasian faces more, their scanning of the eyes of the Caucasian faces was not significantly correlated with their face recognition latency. This null finding suggests that a more eye-centric scanning pattern may not be a good strategy for Chinese participants to recognize Caucasian faces. Additional investigation is thus needed to explore the optimal strategy for Chinese participants to remember and recognize Caucasian faces and to determine whether specific expertise at processing Caucasian faces is needed for this more eye-centric scanning pattern to benefit face recognition. Second, Chinese participants' more nose-centric scanning pattern for own-race faces was not related to face recognition performance when recognizing own-race foil faces. Foil faces in the test can be regarded as unfamiliar faces, because they are new and were not seen during the encoding period. This null finding might be due to the fact that participants needed additional information (not only from the nose) to determine that an unfamiliar face had not been previously viewed and thus to make a correct decision to reject it as an “old” face.


The interhemispheric average angle map (Fig. 3A3) shows consistency between the retinotopic field boundaries from the two hemispheres, and reveals subtle features that were less apparent in the individual hemisphere maps. The consistency is evidenced by the clear vertical meridian lines visible in the interhemispheric color map, especially in the yellow/red regions. The consistency between hemispheres was greatest in the vicinity of the V1/V2 boundaries (vertical meridians) in the lingual gyrus and cuneus. The standard deviation maps (Fig. 3B1-3) show low variability in these zones, not only for the interhemispheric average angle map, but also for the individual hemisphere angle maps. Two fields that represent the entire contralateral hemifield were identified at the ventral and dorsal edges of the statistically significant regions of the average angle maps. hV4, seen in individual subjects but somewhat obscured in the F-value-thresholded (F > 2.5) group average map, was identified as the region located near the posterior end of the collateral sulcus (Fig. 3A3). The interhemispheric average angle map also suggested a T-junction shape formed by the branching of the most ventral vertical meridian on the map (in the posterior collateral sulcus) that is indistinct in the individual hemisphere maps. The location and configuration of this branch were consistent with the VO1/VO2 boundary (Brewer et al., 2005, Wandell and Winawer, 2011 and Witthoft et al., 2013). However, because hV4 could not be clearly distinguished from VO1/VO2, the properties of these fields will not be discussed further. In contrast, a distinct region superior to V3d showed a smooth transition from responses to the lower vertical meridian (blue/cyan) through the horizontal meridian (green) to a clear representation of the upper vertical meridian (yellow/red). A visual field map in this location of the cortex (just below the transverse occipital sulcus in the posterior cuneus) is commonly divided into two fields, V3A and V3B, with the boundary between the two located at a representation of the fovea in a retinotopic eccentricity map. However, this boundary was not clear in our eccentricity maps (Fig. 3C). To acknowledge this, only the label V3A/B was given to this portion of the whole-hemifield map, as shown in Fig. 3A. Previous studies, e.g. Wandell, Brewer, and Dougherty (2005), have established that V3A is the more anterior of the two fields sharing this common retinotopic angle map.
Fig. 3A, Table 1, and Table 2 show that the dorsal and ventral aspects of the VCFs had strongly different mean polar angle values [Table 2: aspect], reflecting processing of lower and upper field stimuli, respectively. The mean angles were only slightly affected when the polar-angle significance threshold was used to limit the sampling [Table 2: aspect × PA threshold] and varied somewhat across the different VCFs [Table 2: aspect × VCFs]. Mean polar angle values were somewhat higher in the right than in the left hemisphere [Table 2: hemisphere] across the VCFs V1, V2, and V3. The only effect of sampling type (subject sampling vs. average-map sampling) was a modest interaction with aspect [Table 2: aspect × sampling].


These differences between phase difference conditions may be caused by differences in the materials perceived in each condition. According to Masuda et al. (2013), “bending motion” impressions predominated for 30-deg. phase differences, and “waving motion” impressions increased as the phase difference shifted closer to 90 deg. In nature, bending motions occur with high-elasticity solid objects, and waving motions occur with low-elasticity solid or fluid objects. In other words, a bending motion can be achieved by an object with solid-like properties, while a waving motion can be achieved by an object with liquid-like properties. When a perception of solidity was caused by the inducers’ oscillation, an object was perceived as harder when the oscillation had ample damping, as more elastic in specific combinations such as increased oscillation with either ample or no damping, and as more viscous when the oscillation had ample damping or the frequency of oscillation decreased. When the perception of liquidity was caused by the inducers’ oscillation, perceptual hardness was not influenced by amplitude or frequency change; the object was perceived as more elastic when the frequency of oscillation increased, and as more viscous when the frequency of oscillation decreased. The effects of amplitude and frequency change on perceived material properties are partially inconsistent with the physical equation of oscillation: in a perceptually solid-like object, the influence of frequency change on hardness and elasticity and the influence of damping on viscosity may be related in part to the physical behavior of oscillation, while the influence of frequency change on viscosity is not consistent with the physical behavior of oscillation. In a perceptually liquid-like object, the influence of frequency change on elasticity and the influence of damping on viscosity may be related in part to the physical behavior of oscillation, while there is no influence on hardness and the influence of frequency change on viscosity is not consistent with the physical behavior of oscillation. We confirmed that amplitude and frequency changes in visual oscillation affected the ratings of materials and that their effects were partly consistent with the physical motion of oscillation.
Taken together, the phenomena we report here imply that the visual system picks up information about material properties from visual motion roughly in accordance with descriptions of physical motion by parameters in physical-motion equations.
It must be noted that physical motion is induced by various forces, and observed motion events necessarily include their effects. Past studies on motion perception have reported a relationship between perceived motion and the physical laws under which natural motion occurs. The phenomenon known as the “kappa effect” (Abe, 1935 and Cohen et al., 1953) is an example of this relationship: the judged final position of a moving object and the perceived temporal duration of a motion are influenced by environmental contexts related to natural motion, such as gravity, friction, energy transfer, and driving force in the observed motion. Intriguingly, such environmental-context effects on natural motion disappear when the object’s movement appears to deviate widely from the natural laws of physics as they act upon an unpowered body (Masuda et al., 2011a). Such deviation from natural motion is potentially caused by various additional forces, as typified by adding external forces or self-driving forces to a moving object.


The intersection between the type of learning acquired through active sensing and the type of learning permitted through plasticity within V1 is not at all clear. The alternative code used as an example above would require that the cortex learn to appropriately combine the signals between the pair of electrodes that encode the same region of visual space. This could be facilitated both by the appropriate electrode spacing (Ghose & Maunsell, 2012) and by active sensing, which would enforce the tendency for head movements to result in translations of related pairs of signals. Alternatively, the structure of the inputs themselves (and the patterns they create across the input array) may drive local plasticity toward the extraction of those patterns, which may in turn facilitate higher-order pattern learning, such as that developed through active sensing.
To our knowledge, no studies have specifically asked whether plasticity mechanisms in the adult brain support the learning of a new code if normal sensory inputs are replaced with visually derived inputs through a cortical prosthesis. As such, the questions raised in this section await empirical answers and the viewpoints offered above are intended purely as provocative speculation that we hope will stimulate future research. Studies that directly investigate the conditions under which adult cortex adaptively reorganizes in response to arbitrarily patterned input will go a long way toward resolving the question of whether a cortical entry point for visual restoration makes sense. These studies will need to show that any reorganization is specific to the patterns that drive it and that this reorganization improves the representation of whatever information those patterns contain.
Acknowledgments
We are grateful to John Maunsell for helpful comments on the manuscript. This work was supported by NIH EY11379 to RTB.
Abbreviations: RP, retinitis pigmentosa; RBC, retinal bipolar cell; CBC, cone bipolar cell; MPDA, multi-photodiode-array; FEM, finite element method
Keywords
Retinal implant; Electric stimulation; Retinal bipolar cells; Compartment model
1. Introduction
Neuroprosthetic subretinal implants for blind patients have shown that restoration of vision is possible in principle (Zrenner et al., 2011). The quality of artificially created vision has not yet reached levels comparable to natural vision in humans, and this goal may be a difficult one to achieve. Subretinal implants in retinitis pigmentosa (RP) patients suffering from photoreceptor degeneration are located in the area formerly occupied by rods and cones, between the retinal pigment epithelium and the outer plexiform layer (Fig. 1).
Fig. 1. Tübingen subretinal implant and its location relative to the retina. The micro-photodiode-array (MPDA) is surgically inserted between the retinal pigment epithelium (RPE) and the bipolar cell layer, into the area formerly occupied by photoreceptors. Each unit of the MPDA contains a photodiode capturing incident light, an amplifier circuit and a stimulating electrode. The light-dependent voltage generated by these electrodes primarily stimulates bipolar cells. Iso-potential lines generated by one stimulating element are shown in red. The visual information is then projected to the brain through ganglion cell axons after activation of the retinal network.