The present work deals with variability, correlation, path-coefficient analysis and selection indices for eleven quantitative characters in chickpea. The variability of the studied characters showed that they are quantitative in nature and under polygenic control. For all characters, the phenotypic variance was greater than the environmental component of variance. Phenotypic coefficients of variability (PCV) were in general higher than the corresponding genotypic coefficients of variability (GCV) for all characters, suggesting that the apparent variation is due not only to the genotypes but also to the influence of the environment. The estimates of heritability, genetic advance (GA) and genetic advance as a percentage of the mean (GA%) in the present investigation were low, indicating that the scope for improving these traits through selection is limited. SW/P was positively correlated with DMF, NPBMF, NSBMF, PWFD, PdW/P and NS/P at both the phenotypic and genotypic levels. Path coefficient analysis indicated that PdW/P and NS/P, which had high positive direct effects on yield, were the major contributors to SW/P.
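As a rough illustration of how these variability estimates relate to one another, the sketch below computes PCV, GCV, broad-sense heritability and genetic advance from genotypic and environmental variance components using the standard textbook formulas; the function name and the numerical values are purely illustrative and are not taken from the study.

import math

def variability_stats(var_g, var_e, mean, k=2.06):
    """Standard variability estimates from genotypic (var_g) and
    environmental (var_e) variance components; k = 2.06 is the
    selection differential at 5% selection intensity."""
    var_p = var_g + var_e                    # phenotypic variance
    gcv = math.sqrt(var_g) / mean * 100      # genotypic coefficient of variability
    pcv = math.sqrt(var_p) / mean * 100      # phenotypic coefficient of variability
    h2 = var_g / var_p                       # broad-sense heritability
    ga = k * math.sqrt(var_p) * h2           # genetic advance
    ga_pct = ga / mean * 100                 # genetic advance as % of mean
    return {"PCV": pcv, "GCV": gcv, "h2": h2, "GA": ga, "GA%": ga_pct}

# Hypothetical values only, e.g. for seed weight per plant (SW/P)
print(variability_stats(var_g=1.8, var_e=2.4, mean=14.5))

With these formulas, PCV is always at least as large as GCV, and a low heritability directly depresses the expected genetic advance, which is the pattern the abstract reports.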
This book presents the development of efficient reinforcement learning methods carried out as postgraduate research. A reinforcement learning agent tries every state-action pair to find the optimal policy without prior knowledge of the domain. In large domains, visiting every state-action pair is not feasible for an agent, so the standard reinforcement learning approach is not applicable to many real-world problems. Three new methods are proposed to make learning efficient according to the characteristics of the problem: Task-Oriented Reinforcement Learning reduces the problem size by viewing it from the task's viewpoint, which clarifies the task-relevant state variables. Symmetrical-Actions Reinforcement Learning reduces the size of a learning problem by exploiting partial symmetry over action-relevant state variables and representing action values by a single function. Coordinated Multiagent Reinforcement Learning uses a coordinator-agent hierarchy to keep the size of the individual learning problems small. Depending on the problem characteristics, any or all of these methods can be applied to solve a problem efficiently using reinforcement learning.
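For context, the sketch below shows plain tabular Q-learning, the baseline that the three proposed methods aim to improve on: it keeps one value per state-action pair, which is exactly what becomes infeasible when the state space is large. The environment interface (reset, step, actions) is assumed here for illustration and is not the book's API; the code does not implement any of the three proposed methods.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Generic tabular Q-learning with one table entry per state-action pair."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection over the known actions
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # one-step temporal-difference update toward the greedy target
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

Because the table grows with the product of the state and action spaces, reducing the effective problem size (by task-relevant state variables, action symmetry, or a coordinator-agent hierarchy) is what makes learning tractable in the settings the book addresses.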