Monday, November 8, 2010

Challenge-Driven Learning - A great one from Global Knowledge

Challenge-Driven Learning: Our Approach to Instructional Design

We employ a unique approach to instructional design that both maximizes learning effectiveness and accelerates the development process. An adaptation of the ISD model, our approach begins with a clear definition of the business challenges your organization faces, as well as those that learners face in the work environment. From these challenges, scenarios are derived that give learners opportunities to face each challenge as they would in the workplace.
Our approach is to provide a curriculum that is engaging, interactive, and relevant to the specific job tasks and responsibilities of each intended audience group. We'll create learning support devices, including explanations, demonstrations, examples, tools, war stories, and reference materials, that help learners work through each scenario.
Finally, we define a tailored learning environment to determine the optimal blend of instructor-led and self-paced delivery methods, the timing, and the measurement or tracking processes.
This approach results in learning experiences that facilitate active discovery and practice, are set in a realistic context, and are easy to recall and apply after training is complete.
We understand that the more relevant the training and information is to the learner, the more knowledge the learner will retain.
The framework is organized into four layers. Each layer below is followed by the key questions it addresses and the deliverables it produces.
Business challenge
  • What is new or changing? What are the business goals behind this change?
  • What problems/difficulties will arise?
  • Whom is the change affecting?
  • Why will the individuals be motivated either to tackle or resist the challenge?
  • How will we determine success?
Requirements document:
  • Business, audience, context, and tasks analyses
  • Initial draft of learning solution (design blueprint)
  • Description of what the learner will experience
  • Design blueprint, including the following components: audiences, objectives, proposed topics, proposed sequencing
Scenario
  • What new situations will learners encounter?
  • In what context will they encounter these new situations? What are typical scenarios that they will find themselves in?
  • What will they need to do in those situations? What will success look like?
  • What is difficult about doing the new tasks? What mistakes will they likely make and why?
  • How will learners' performance on these scenarios be measured?
Revised design blueprint with the following components added:
  • Scenarios
Learning support
  • What new concepts and skills will learners need to know to handle these new situations?
  • What tools, references, and people will learners need to perform effectively?
Revised design blueprint with the following components added:
  • Learning support topics
Learning Environment
  • How will scenarios and support elements be represented, structured, and sequenced?
  • What modalities will be needed to facilitate training and performance support?
  • How will learners access learning experiences and performance support in both the training and work environments?
  • How will performance be tracked?
Revised design blueprint with the following components added:
  • Instructional strategies and methods
  • Sequencing
  • Modalities
  • Durations
  • Templates, mock-up, prototype
  • Storyboard
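Purely as an illustration (not part of the Global Knowledge material), the design blueprint that accumulates through the layers above could be captured as a simple structured record. The field names and sample values below are invented.

```python
# Hypothetical sketch of a design blueprint evolving through the four layers.
# Field names and values are illustrative, not an actual Global Knowledge template.

design_blueprint = {
    "business_challenge": {
        "whats_changing": "New CRM rollout",
        "business_goals": ["Shorter call-handling times"],
        "affected_audiences": ["Call-center agents", "Team leads"],
        "success_measures": ["First-call resolution rate"],
    },
    "scenarios": [
        {
            "situation": "Customer requests an order change mid-call",
            "success_looks_like": "Change logged in the CRM without placing the caller on hold",
            "likely_mistakes": ["Opening the legacy order screen"],
        },
    ],
    "learning_support": [
        "Demonstration video",
        "Quick-reference card",
        "War story from a senior agent",
    ],
    "learning_environment": {
        "modalities": ["Instructor-led kickoff", "Self-paced practice"],
        "sequencing": ["Scenario walk-through", "Guided practice", "Assessment"],
        "tracking": "LMS completion plus scenario scores",
    },
}

# Each layer of the process adds or revises one section of this record,
# mirroring the "revised design blueprint" deliverables listed above.
```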

Wednesday, November 12, 2008

Social Factors and Instructional Design - A Study

Abstract
Instructional developers commonly use the Research Development Diffusion model when developing products. A major problem is that products developed with this model have failed to be widely adopted in practical settings. The authors believe the model is flawed because it fails to account for the social factors present at the adopting sites that influence adoption. The authors conclude that social factors must be incorporated into the instructional development process in order to increase adoption.

Incorporating Social Factors into Instructional Design Theory

Technology and society are inseparable. The design, development, adoption, utilization, and diffusion of technology are inherently social processes. As Howard Segal writes in his book Future Imperfect (1994), "all structures and machines, primitive or sophisticated, exist in a social context and, unless designed for the sake of design itself, serve a social function" (p. 2). Technology and society interact and influence each other, sometimes benignly, other times violently. Technology impacts, shapes, and redefines society and, in turn, a variety of social factors affect the development, implementation, and spread of technology.
As with all other technologies, society and the technology of instruction are irrevocably intertwined. Many instructional design theories, however, neglect or ignore the social context in which instructional products are intended to be used. The primary purpose of this paper is to provide a basic understanding of the important role that society plays in the adoption of technology and to suggest methods for incorporating societal factors into the instructional development process.
Before discussing social factors specifically, it is important to have a general understanding of why social factors are important and relevant to the field of instructional design. Social factors are important to ID because instructional products have not been widely utilized in educational and training settings (Burkman, 1987). The Research Development Diffusion (RDD) paradigm that predominates in the field of instructional development has proven to be inadequate to the task of producing instructional products that people want to use.
The RDD paradigm seems capable of producing effective instruction but is flawed by its overreliance on "Technology Push" -- a belief in the inevitable forward advance of society powered by ever improving and more powerful technology. Technology Push assumes that products which are technologically sophisticated and technically sound will be, as a direct result, widely adopted and correctly utilized. The overall failure of many large-scale curriculum development projects in the 1960s (Hall and Hord, 1987) is a notable example of the fallacy of Technology Push and highlights the limitations of the RDD paradigm.
The current development paradigm's lack of attention to social factors and overreliance on Technology Push often result in the development of instructional products that are not widely adopted even though the products may be technically sophisticated and instructionally sound. In order to increase the utilization of instructional technologies, it will be necessary to expand the RDD paradigm and account for the many factors which impede or facilitate the adoption of instructional products. As will be discussed in this paper, societal factors play a vital role in the process of technology adoption. Incorporating social factors into the process of instructional development is essential to creating instructional products that are both instructionally sound and desirable to potential adopters.

Social Factors and the Adoption of Innovations

All technologies impact the society in which they are used. Toffler (1970) succinctly describes technology's impact when he writes that "new machines do more than suggest or compel changes in other machines -- they suggest novel solutions to social, philosophical, even personal problems ... they alter man's total intellectual environment -- the way he thinks and looks at the world" (p. 29). Segal (1994) adds an important point when he writes that "if, as in the significant case of the auto, modern technology solved a number of problems, social as well as technical, from the outset it simultaneously bred or helped to breed several others, social and technical alike" (p. 30).
The literature related to the adoption of innovations is replete with discussions of the importance of societal factors. One of the most comprehensive theories of diffusion is described in E. M. Rogers' (1983) book Diffusion of Innovations. Figure 1 summarizes a number of variables identified by Rogers that influence the rate of adoption.
Figure 1. The variables which influence an innovation's rate of adoption (Rogers and Shoemaker, 1971).
As shown in Figure 1, a number of factors play a role in determining the rate at which an innovation will be adopted. What is most notable about Rogers' model is that the technological superiority of an innovation plays a relatively minor role in determining rate of adoption. Many other factors, most of them relating to the social factors present at the adopting site, play just as large a role as technological superiority in influencing rate of adoption. Among the factors identified by Rogers are: the way the innovation is perceived by potential adopters; the type of decision-making processes at the adopting site; and the social system (the values and norms) in place at the adopting site.
Another model of innovation diffusion that stresses the importance of social factors is the Stockdill and Morehouse Model. Stockdill and Morehouse's (1992) model is a synthesis of many diffusion theories and provides a thorough overview of the many factors that affect the adoption of an innovation. The factors are grouped into five categories: 1) educational need, 2) user characteristics, 3) content characteristics, 4) technology considerations, and 5) organizational capacity. The authors recommend that the change agent hoping to introduce a new technology analyze the factors included in each category. Based upon the analysis of each category, the change agent must decide whether to stop the adoption effort, reconsider the effort, or proceed to the next category for analysis. As with Rogers, Stockdill and Morehouse emphasize that a number of factors, not only technological considerations, play a vital role in the adoption of innovative technologies.
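As a hedged sketch (not taken from Stockdill and Morehouse's paper), the category-by-category analysis they recommend might be organized like this; the five categories come from the model, while the scoring scheme and example data are invented.

```python
# Illustrative sketch of Stockdill and Morehouse's staged analysis.
# The five categories come from the model; the scoring and decision rules are hypothetical,
# purely to show the stop / reconsider / proceed flow described above.

CATEGORIES = [
    "educational need",
    "user characteristics",
    "content characteristics",
    "technology considerations",
    "organizational capacity",
]

def decide(findings):
    """findings: dict of analysis notes scored -1 (barrier), 0 (unclear), +1 (facilitator)."""
    score = sum(findings.values())
    if score > 0:
        return "proceed"
    return "reconsider" if score == 0 else "stop"

def run_analysis(all_findings):
    for category in CATEGORIES:
        decision = decide(all_findings.get(category, {}))
        print(f"{category}: {decision}")
        if decision != "proceed":
            return decision   # pause or abandon the adoption effort here
    return "adopt"            # all five categories favor adoption

# Fabricated example data, for illustration only:
print(run_analysis({
    "educational need": {"documented performance gap": 1},
    "user characteristics": {"prior experience with similar tools": 1,
                             "anxiety about new technology": -1},
}))
```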
The Rogers and Stockdill and Morehouse models highlight the central role that social factors play in the diffusion of innovations literature. Current diffusion literature is, in many ways, antithetical to the RDD paradigm's reliance on Technology Push. In spite of this, instructional developers continue to believe that instructional effectiveness and technological superiority alone will guarantee the adoption and diffusion of their products.

Limitations of Existing Instructional Development Models

Despite the central role societal factors have in the adoption and diffusion literature, instructional products are often designed without regard to the social factors that influence adoption and utilization. One likely reason for this neglect can be found by examining the theoretical models commonly used in the field of instructional technology. These models are used by instructional designers and systems developers to manage and organize instructional development activities and to communicate the overall process to clients (Gustafson, 1991). Instructional development models provide the procedural framework by which instructional products are produced.
There are numerous models of instructional development. Gustafson (1991) skillfully organizes many of the most widely used instructional development models into a logical taxonomy. Gustafson classifies the models into Classroom ID Models, Product Development Models, and Systems Development Models. For the purpose of this paper, we will primarily discuss the product development models.
Perhaps the most widely used instructional development model is the Dick and Carey Model (1990). While Gustafson classifies this as a Systems Development Model, it is also commonly used by instructional product developers. The Dick and Carey Model describes a development process that begins with the identification of goals and proceeds through formative evaluation, revision, and summative evaluation. There is little doubt that the model provides a valuable description of all of the key ID activities and places them in a logical sequence. Notably lacking from this model, however, is any mention of the social context in which the product will be implemented.
As with the Dick and Carey Model, other widely used product development models also fail to account for social context. Gustafson (1991) writes that the goal of product development models is "to prepare an effective and efficient product as quickly as possible" (p.7). While all three of the product development models reviewed by Gustafson describe a logical process for developing "an effective and efficient product", none of them contains a thorough discussion of the need to analyze the social context in which the product will be used. In fact, only one of the three, The Van Patten Model (1989), even mentions the need to consider the implementation or continuing maintenance of an instructional product.
In reviewing Systems Development Models, Gustafson writes that such models usually call for an extensive analysis of the use environment before instructional development even begins. Of the five systems models reviewed by Gustafson, two -- The IDI Model and The Diamond Model -- do discuss in some detail the need for an analysis of the social context. The IDI Model (Twelker, 1972) calls for an analysis of the audience, organizational personnel, and organizational resources before development begins. The Diamond Model (1989) goes even further than the IDI Model and calls for an analysis of societal and organizational needs and for an examination of human and organizational resources before development.
The examination of the preceding instructional development models leads to three important conclusions. First, none of the most widely used product development models include an analysis of the social context as an important part of the development process. Second, product development models do not always mention adoption and diffusion, and when they do, adoption and diffusion are typically considered near the end of the development process, usually after the product has been developed. Third, while some systems development models do tend to call for a thorough analysis of social context, these models are not often used to guide the production of specific instructional products but, rather, are reserved primarily for the development or repair of broader instructional systems.

Tools for Incorporating Social Factors into the ID Process

We have seen in the previous section that most ID models don't adequately account for the social factors that influence an innovation's rate of adoption. There are, however, a number of tools that can be incorporated into existing practices to increase the attention to social factors. Incorporating these tools into existing models is a logical and necessary step in the evolution of instructional development theory and will result in a powerful synthesis of diffusion theory and instructional development theory.
Beginning with the initial research phase of instructional development, the diffusion literature tells us that key consideration should be given to the physical and social attributes of the implementation environment. Critical factors can be discovered through an Environmental Analysis (Tessmer, 1991) and an Adoption Analysis (Farquhar & Surry, 1994). Each procedure identifies key social characteristics that have profound impact on the design of instructional products.
Adoption Analysis is a process, performed during the analysis phase, by which instructional developers identify key factors that will likely influence the adoption of their product. The analysis allows developers to account for the most vital adoption factors during the development process. An adoption analysis focuses on both individual and organizational factors. Developers look at the user characteristics and perceptions of the potential adopters in order to determine the type of product potential adopters are looking for. Also, the physical environment and support systems in place at the adopting site are analyzed to determine the technical specifications that will make the product more likely to be adopted and maintained. Based upon the adoption analysis, modifications are made to the product's design in order to create a product that is desirable and practical to the adopters.
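A minimal sketch, using invented field names and toy rules, of how the individual and organizational factors from an adoption analysis might be recorded and translated into design changes; this is not Farquhar and Surry's actual instrument.

```python
# Hypothetical record for an adoption analysis (in the spirit of Farquhar & Surry, 1994).
# The two factor groups follow the paragraph above; everything else is illustrative.

adoption_analysis = {
    "individual_factors": {
        "adopter_roles": ["classroom teachers", "lab assistants"],
        "perceptions": {
            "relative_advantage": "unclear to teachers",
            "complexity": "seen as high",
        },
    },
    "organizational_factors": {
        "physical_environment": "shared lab, 30-minute booking slots",
        "support_systems": "one part-time technician",
    },
}

def design_modifications(analysis):
    """Translate analysis findings into candidate design changes (toy rules only)."""
    changes = []
    perceptions = analysis["individual_factors"]["perceptions"]
    if perceptions.get("complexity") == "seen as high":
        changes.append("add a guided quick-start path")
    if "30-minute" in analysis["organizational_factors"]["physical_environment"]:
        changes.append("chunk activities to fit short lab sessions")
    return changes

print(design_modifications(adoption_analysis))
```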
Product evaluation, both summative and formative, has long been an essential part of the instructional development process. Formative-evaluation methodologies commonly practiced in the field of software development include rapid prototyping, usability testing, implementation evaluation, and field testing (Flagg, 1990; Skelton, 1992; Tripp & Bichelmeyer, 1990). Each of these methods incorporates a cycle of feedback from selected individuals within the target population. This information is used to modify design and implementation strategies, thus improving the product's chances for successful adoption and utilization. We contend that the most successful formative-evaluation methods are those that are conducted in social environments most reflective of the planned implementation sites.
Ernest Burkman (1987) was one of the first authors in the field of instructional technology to provide specific suggestions for incorporating social factors into the RDD paradigm. Burkman writes that, in order to increase the utilization of instructional products, instructional development models should be more user-oriented. Burkman's User-Oriented Instructional Development (UOID) Model is a five step process, based in part upon Rogers' (1983) theory of perceived attributes, for incorporating important social factors into the development process. The five steps of the UOID Model are:
1. Identify the Potential Adopter
2. Measure Relevant Potential Adopter Perceptions
3. Design and Develop a User-Friendly Product
4. Inform the Potential Adopter (of the user-friendly attributes)
5. Provide Post-Adoption Support
The Concern Based Adoption Model (CBAM) (Hall & Hord, 1987) is another excellent tool for incorporating social factors into the instructional development process. In their book Change in Schools, Hall and Hord (1987) describe a process change facilitators can use to bring about change in a school setting. The CBAM model stresses the need for change facilitators to understand change from the point of view of the people who will be affected by the change. While CBAM deals with change in school settings, the techniques described by the authors and the model's emphasis on seeing innovations from the point of view of the potential adopters are transferable to other settings.
There are several components to the CBAM Model. One of the most useful components to instructional developers is Probing. According to the authors, the change facilitator must probe to determine how the change clients experience a proposed innovation. The authors write that change clients experience an innovation through three dimensions: Stages of Concern, Levels of Use, and Innovation Configurations. The authors also stress the importance of considering the context in which an innovation will be used. Hall and Hord recommend that the change facilitator make an intervention based upon the analysis of the three dimensions and the context of the innovation.
The final tool that can help integrate social factors into the instructional development process is Systems Theory. Systems Theory attempts to create a holistic view of a given process by identifying all of the inputs and outputs of a system. Systems Theory is not a new concept, nor is it completely foreign to the ID field. The systems engineering approach gained popularity in the 1950s, partly as a response to the prevailing view that hardware was the most important component of a successful system (Saettler, 1990). The 1950s notion that hardware is the most important component of a system is analogous to the prevailing notion in instructional development today -- that an effective and technologically superior instructional product is the most important factor in adoption.
Schiffman (1991) describes five views of instructional development ranging from the most narrow media-only view to a highly synthesized systems view. The systems view sees an instructional product not as a separate, isolated entity, but as an entity that will exist in a highly complex, integrated and interconnected system. The systems view represents a modern and sophisticated way of looking at an instructional product and can be a valuable tool for developers looking to increase the adoption of their products.

Conclusions and Recommendations

A complex variety of social factors influence the adoption of new technologies. The RDD paradigm, and the myriad of instructional design models based on that paradigm, do not adequately account for the importance of social factors in product adoption. As a result, instructional technologies have experienced a lack of utilization, not only in traditional educational settings, but in military and industrial settings (Burkman, 1987). In order to address the inadequacy of existing models and to facilitate the adoption of instructional products, social factors should be incorporated into instructional development models. The following recommendations are provided in the hope that they will contribute to the evolution of instructional development theory.
Instructional developers should consider adoption and diffusion as strongly as they consider instructional effectiveness. Developing effective and efficient instructional products does not necessarily mean that the products are desirable or useful to potential adopters. The field of instructional development has made great breakthroughs in designing and developing effective instruction. Few breakthroughs have been made, however, in developing products that people want to use. One of the basic tenets of instructional technology is "if the objectives were not met, it means the instruction was not adequate." It seems odd, therefore, that when an instructional product is not adopted, instructional developers often blame the potential adopters. Another basic tenet of the field should be "if the product was not adopted, it means the design of the product did not adequately plan for adoption."
Instructional developers should understand that adoption is the result of purposeful planning and does not automatically follow the development of instructionally or technically superior products. The adoption theories of E. M. Rogers and Stockdill and Morehouse discussed in this paper describe innovation adoption as a complex process that is influenced by many factors. Technological superiority is only one of a number of factors that influence a person's decision about whether or not to adopt an innovation. The complex process outlined in the adoption literature reveals Technology Push to be an overly simplistic concept and shows that instructional developers must do more than create effective products if they want to increase utilization. In order to increase utilization, developers must understand the complexity of the adoption process and develop a systematic plan that determines and accounts for the most important factors.
Instructional designers should modify their design and development models to incorporate the various tools discussed in this paper. If instructional developers are to plan for adoption as carefully as they plan for instructional effectiveness, then current models of instructional development will be insufficient for the task. Planning for adoption requires an evolutionary advance in the models instructional developers use. Emerging theories that place an emphasis on the user and on the social context in which a product will be used can be incorporated into existing product development models. Adoption Analysis, User-Oriented Instructional Development, Rapid Prototyping, and Field Testing are only a few of the tools that developers can use to determine and account for adoption factors.
Research should be undertaken to determine the best method for incorporating the tools into the ID process and to determine if the tools have any effect on product adoption. The tools described above have not been examined in practical settings. There is no published evidence to suggest that employing any of the tools will result in the increased adoption or facilitated implementation of an instructional product. Research into the effects of these tools is non-existent and urgently needed. Large-scale longitudinal studies that examine the impact and effectiveness of these tools on the adoption and continuation of instructional products would be laborious and costly, but very useful. In addition, applied research is needed into how to use the tools during the development process. How, for example, should a development team actually conduct an adoption analysis? What are the best techniques for testing the usability of a product? How can rapid prototyping assist developers in determining the perceptions of potential adopters? These are only a few of the important and unanswered questions related to social factors and instructional development.
In conclusion, it is not the intention of this paper to put forth a new model of instructional development. We agree with Gustafson's (1991) conclusions that "the literature is replete with models, each claiming to be unique and deserving of attention" (p.47) and "it appears that well over half of the ID models have never actually been applied, never mind rigorously evaluated" (p. 47). The last thing the ID field needs is another untested design model claiming to be unique and valuable.
Much more important than putting forth a new ID model, what is really needed is a new way of thinking. Instructional developers should consider the potential adoption and implementation of their products as carefully as they consider the instructional outcomes. Put another way, the value of an instructional product should be measured by the degree of adoption and the success of implementation just as much as it is now measured by cognitive and affective outcomes. In order for this to happen, instructional developers will have to analyze and account for the social context in which their products will be used. Also, developers will have to make adoption an important consideration of their design models throughout the entire ID process.
Like it or not, instructional products do more than help learners attain certain instructional objectives. To borrow from Toffler and Segal, instructional products suggest novel solutions, alter the way people look at the world, and simultaneously solve and breed a number of problems. Instructional products will never be widely utilized until instructional developers understand the powerful role that social factors play in adoption. Instructional developers who don't realize the impact their products have on society, on real people in real places, are viewing their products too narrowly and ignoring the biggest obstacle, and greatest potential, of their field.

References

Burkman, E. (1987). Factors affecting utilization. In R. M. Gagné (Ed.) Instructional Technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum.
Diamond, R. M. (1989). Designing & improving courses and curricula in higher education: A systematic approach. San Francisco: Jossey-Bass.
Dick, W. & Carey, L. (1990). The systematic design of instruction (3rd ed.). Glenview, IL: Scott, Foresman/Little, Brown Higher Education.
Farquhar, J. D. & Surry, D. W. (1994). Adoption analysis: An additional tool for instructional developers. Education and Training Technology International, 31 (1), 19-25.
Flagg, B. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Lawrence Erlbaum.
Gustafson, K. L. (1991). Survey of instructional development models (2nd ed.). Syracuse, NY: Information Resource Publications. (ERIC Document Reproduction Service No. ED 335 027)
Hall, G. E., & Hord, S. M. (1987). Change in schools. Albany: SUNY Press.
Holloway, R. E. (1977). Perceptions of an innovation: Syracuse University Project Advance. Dissertation Abstracts International, 39, 572-573A. (University Microfilms No. 78-11, 656)
Hurt, H. T., & Hibbard, R. (1989). The systematic measurement of the perceived characteristics of information technologies I: Microcomputers as innovations. Communication Quarterly, 37 (3), 214-222.
Moallemian, M. (1984). A study of college instructor acceptance of an innovation as related to attributes of innovation. Dissertation Abstracts International, 45, 3535A. (University Microfilms No. 85-03, 250)
Okey, J. (1990). Tools of analysis in instructional development. Educational Technology, 30 (6), 28-32.
Rogers, E. M. (1983). Diffusion of innovations. (3rd ed.). New York: The Free Press.

Thursday, October 16, 2008

Tree Structure - Too poor!

This is an interesting article from Ripul of Kern...
Just thought of pasting it here..
Recently I found myself staring at another Enterprise Software Application (ESA) that uses a tree view as primary navigation. A tree view on the left is usually not the right choice for navigation in such applications. Here are some reasons why tree views make a very poor choice in an ESA:
1. Many types of artifacts: Typically in ESAs, the tree is composed of artifacts like actions, files, and tasks. This type of hybrid scheme is confusing, as it makes it difficult for users to form a consistent mental model.
2. Inconsistent mental model: An inconsistent mental model increases memory load and makes learning difficult. Even experienced users make mistakes if there is a hybrid scheme.
3. Many points and clicks: To find any action, file, or task, a user usually needs many points and clicks.
4. Clicks are actually slow: Pointing and clicking seems lightning fast, but remember Fitts's Law - each point and click takes a whopping 1 to 1.5 seconds! Each find-and-click sequence may take anywhere upwards of 10 seconds (a rough calculation is sketched after this list).
5. Poor navigational help: A tree structure offers poor help to find and select next logical task after completing the current one. It forces users to learn the next logical step.
6. No use of spatial memory: People use spatial memory to find artifacts on a screen. However, the tree structure does not support spatial memory, which makes artifacts harder to find. It also increases the time to find artifacts.
7. Poor location information: A tree typically provides poor navigational clues - does not tell where the user is now, what is clicked, and what is open. To provide all these clues, a software developer spends a lot of energy.
8. System model differs from user mental model: A tree helps software developers put artifacts as in the systems model. This model is usually very different from the user's mental model.
9. Takes too much space: The tree typically takes more than 20% of screen space. This amount of space for navigating from one point to another is a waste of precious screen area.
10. No corresponding content: Software developers believe that each "leaf node" in the tree must be associated with a corresponding screen. These corresponding screens typically don't contain any content and are shown blank or with content that users never need.
11. Vertical and horizontal scrolling: In most cases the tree is expanded, and the actual content is hidden behind scrollbars. This does not help users find out where they are.
12. Difficult to implement: Contrary to popular belief, implementing a tree view by a developer is very difficult and time consuming. The basic implementation is fast, however, tweaking the tree view for good user experience is time and resource intensive.
13. Only IT users understand deep hierarchies: In our experience, only IT users (read: developers) understand deep tree structures. Others just refuse to understand tree structures with more than 2 types of artifacts (like files and folders). So, files in a folder or a folder in a folder is okay, but a sub-process in a process, a process in a business location, a business location in a business, and a business in an enterprise is NOT okay - it is confusing.
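To make the Fitts's Law point in item 4 concrete, here is a minimal sketch using the common Shannon formulation, MT = a + b * log2(D/W + 1). The constants, distances, and target widths below are illustrative assumptions, not measurements from any real application.

```python
import math

def fitts_movement_time(distance_px, width_px, a=0.1, b=0.15):
    """Predicted pointing time in seconds (Shannon formulation of Fitts's Law).

    a and b are device/user constants; the values used here are illustrative only."""
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a + b * index_of_difficulty

# A hypothetical tree-navigation sequence: expand three nodes, then click a leaf.
targets = [
    (400, 16),  # expand a top-level node (small +/- handle)
    (350, 16),  # expand a child node
    (300, 16),  # expand a grandchild node
    (250, 90),  # click the leaf item itself
]

total = sum(fitts_movement_time(d, w) for d, w in targets)
print(f"Predicted pointing time: {total:.1f} s")
# Roughly 2-3 seconds of pure pointing; visual search between clicks adds several
# more seconds, which is where the "upwards of 10 seconds" estimate comes from.
```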

Wednesday, October 1, 2008

Any Tom, Dick and Carey...!!!!

Have a look at the stages in Dick and Carey model...
Stage 1. Instructional Goals
* Instructional Goal: Desirable state of affairs to be achieved through instruction
* Needs Analysis: Analysis of a discrepancy between an instructional goal and the present state of affairs, or a personal perception of needs
Stage 2. Instructional Analysis
* Purpose: To determine the skills involved in reaching a goal
* Task Analysis (procedural analysis): the product of which is a list of steps and the skills used at each step in the procedure
* Information-Processing Analysis: the mental operations used by a person who has learned a complex skill
* Learning-Task Analysis: the objectives of instruction that involve intellectual skills
Stage 3. Entry Behaviors and Learner Characteristics
* Purpose: To determine which of the required enabling skills the learners bring to the learning task
* Intellectual skills
* Abilities such as verbal comprehension and spatial orientation
* Traits of personality
Stage 4. Performance Objectives
* Purpose: To translate the needs and goals into specific and detailed objectives
* Functions: Determining whether the instruction relates to its goals; focusing the lesson planning upon appropriate conditions of learning; guiding the development of measures of learner performance; assisting learners in their study efforts
Stage 5. Criterion-Referenced Test Items
* To diagnose an individual's possession of the necessary prerequisites for learning new skills
* To check the results of student learning during the process of a lesson
* To document students' progress for parents or administrators
* Useful in evaluating the instructional system itself (formative/summative evaluation)
* Early determination of performance measures before development of lesson plans and instructional materials
Stage 6. Instructional Strategy
* Purpose: To outline how instructional activities will relate to the accomplishment of the objectives
* The best lesson design: Demonstrating knowledge about the learners, the tasks reflected in the objectives, and the effectiveness of teaching strategies
e.g., choice of delivery system: teacher-led and group-paced vs. learner-centered and learner-paced
Stage 7. Instructional Materials
* Purpose: To select printed or other media intended to convey the events of instruction
* Use of existing materials when possible
* Development of new materials otherwise
* Role of teacher: Depends on the choice of delivery system
Stage 8. Formative Evaluation
* Purpose: To provide data for revising and improving instructional materials
* To revise the instruction so as to make it as effective as possible for a larger number of students
* One-on-One: One evaluator sitting with one learner to interview
* Small Group
* Field Trial
Stage 9. Summative Evaluation
* Purpose: To study the effectiveness of the system as a whole
* Conducted after the system has passed through its formative stage
* Small scale / Large scale
* Short period / Long period

Going through these steps, I don't find anything innovative. It's ADDIE, Gagne, and Bloom in another bottle. Maybe the instructional designers wanted to create a separate theory for each and every event they face, or they wanted to be in the limelight so that they would be called instructional thinkers!!!! Any Tom, Dick AND Carey could do this!!

Wednesday, July 30, 2008

Why I object to learning objectives!!!

After reading this post, you will be able to:
*Explain two of the reasons why Ranjith.A.R doesn't like learning objectives
*Explain your own view of learning objectives
*Develop an alternative approach to listing learning objectives in your next eLearning

I hate writing learning objectives. I see the value. I do. At least from the instructional designer's and the business's point of view. Learning objectives clarify exactly what it is you're trying to teach. But I find them painfully boring to read and to write.
Ray Sims has written a great summary on Writing Learning Objectives, with citations to some good resources, including Vicki Heath's post Learning Objectives: Writing Learning Outcomes So They Matter. Vicki states as the first benefit of learning objectives: "Learners can focus more easily on what is important to their actual workplace performance." Her statement is in keeping with traditional instructional design theory that says that learning objectives help learners organize their learning efforts. And yet one could argue that most learners don't even bother reading them. As Michael Allen says in Michael Allen's Guide to e-Learning: "Learners think, 'I'm supposed to do my best to learn whatever is here, so I might as well spend all my time learning it rather than reading about learning it.'" The objectives page is one that I always click NEXT to slide right on by. How about you? If you have ever taken an eLearning course (and be honest -- have you really taken an eLearning course?), have you taken the time to read those objectives? Really?
Write Better Objectives
One approach, as Cathy Moore demonstrates so well, is to write better objectives. See her recent post: Makeover: Turn Objectives into Motivators. Michael Allen thinks better-written objectives are a start, but wonders if any form of the "textual listing of objectives [is] really the best way to sell anyone on learning."
"Break the Rules"
Allen urges instructional designers to break the rules: "Don't list objectives." Pretty radical, isn't it? I called this one out as one of the top things I learned about learning in 2007. Instead, provide some meaningful and memorable experiences using interactivity, graphics, animation, and storytelling.
Alternatives to Listing Objectives
Here are some of Michael Allen's alternatives to listing out boring learning objectives ...
Put the Learner to work
Have the learner attempt a task. If they fail, they'll know what they are going to be able to do when they finish your program (hopefully, complete the task).
Use Drama
Create a scenario showing the risk of what could happen if the learner doesn't learn the content -- and the benefits that will happen when she does.
Create a Game quiz
Instead of a traditional, boring assessment, create a game-like quiz. Based on their performance, learners will see if they are beginners or advanced, and where their gaps in knowledge might lie. And they'll be able to see what kinds of tasks they should be able to do at the end of the course.

Check out Karl Kapp's Gadgets, Games and Gizmos for Learning for some simple game ideas.
Thank you folks...!!!

Tuesday, July 8, 2008

INtErACtIoN dESign..!!

Interaction Design Principles
  • Learnability/Familiarity: for example, reduce short term memory load, ensure ease of understanding and guessability, make operations visible, use appropriate metaphors.
  • Ergonomics/Human Factors: for example, allow for flexible input (like menus, shortcuts, panels) and multiple channels of communication; design for user growth
  • Consistency/Standards: for example, likeness in behavior, consistent and clear user interface elements
  • Feedback/Robustness: give an appropriate quantity of response, offer informative feedback, let the user recover from errors or dead-ends, ensure stability, task completeness, and adequacy, respond in time.
  • Visibility - knowing the state of an object and the choices available
  • Feedback - timely, in an appropriate mode (aural, visual, etc.), yet not distracting from task
  • Affordance - use objects whose actual properties are in accordance with their perceived properties (e.g. an icon depicting a switch should turn something on or off)
  • Mapping - make use of the relationship between objects and their environment (e.g. placing a menu bar at the top of an application window)
  • Constraints - limit the possible interactions physically, semantically (context-related meaning), logically, or culturally (learned conventions)
  • Habituation - the use of the system should become internalized to the point that the user only thinks of the task, not the system

HCI design approaches

  • Top-down or hierarchical problem solving - working from the functional level to the specific, working out issues and problems that arise
  • Design by reuse - use of previous designs that are based on similar situations
  • Design problem evolution - recognition and relaxation of assumptions thus engaging in a redefinition of the problem in cycles that involve planning, translating and revising in order to optimize a system so that it can satisfy diverging and contradictory requirements
  • Design by deliberative recognition-priming - use of previous conceptual knowledge and experience to recognize useful patterns to by-pass hierarchical processes
  • Design by serendipitous recognition-priming - ideas that arise from opportunistic comparisons and analogies not necessarily directly related to the design problem.
  • Design by collaboration and confrontation - team-based design based on collaboration and confrontation activities.

Story-based design

Tom Erickson (1995) outlines some ways in which storytelling can be used as a tool for designing human-computer interactions. Stories reveal users' experiences, desires, fears and practices that can in turn drive effective user-centered design. He points out that stories, in contrast to scenarios, involve real people in particular situations and consequently involve unique histories, motivations and personalities.

  • story gathering - gathering users' stories on the users' domain (a culturally, socially and physically situated environment) thereby collecting and building a shared language, referents and questions and issues to be addressed.
  • story making - building 'scenario-like' stories that capture emerging common concepts and details from users' stories
  • involving users - using stories with users to elicit dialog and discussions that bring essential ideas and problems to light that should be considered in the design.
  • transferring design knowledge - being highly memorable and still susceptible to the uncertainty entailed in the particular being applied to the whole, “stories become important as mechanisms for communicating with the organization by supporting design transfer”, by “capturing both action and motivation, both the what and the why of the design” (Erickson, 1995)

Personas in interaction design

Design of an interaction sets the conditions in which a conversation between a user and a system will take place. The system needs to speak and respond to the user. To envision more effectively how such a conversation may proceed, interaction designers determine user personas. Personas are defined models of intended and potential user types. These models can be defined through ethnographic research practices such as observation, interviews or direct user-testing with sample target users. Personas are widely used in user-centered design approaches.
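As a small, hypothetical illustration of a persona as a "defined model" of a user type, the sketch below shows how such a model might be recorded; the fields and the example persona are invented, not drawn from any cited source.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A hypothetical persona record; real projects choose their own fields."""
    name: str
    role: str
    goals: list[str]
    frustrations: list[str]
    context: str
    quotes: list[str] = field(default_factory=list)

# Invented example persona, as if derived from interviews and observation.
anita = Persona(
    name="Anita",
    role="Call-center supervisor",
    goals=["Resolve escalations quickly", "Coach new agents"],
    frustrations=["Switching between five tools per call"],
    context="Open-plan office, frequent interruptions, dual monitors",
    quotes=["I don't have time to hunt through menus."],
)
```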

Kemp's model

  1. Identify instructional problems, and specify goals for designing an instructional program.
  2. Examine learner characteristics that should receive attention during planning.
  3. Identify subject content, and analyze task components related to stated goals and purposes.
  4. State instructional objectives for the learner.
  5. Sequence content within each instructional unit for logical learning.
  6. Design instructional strategies so that each learner can master the objectives.
  7. Plan the instructional message and delivery.
  8. Develop evaluation instruments to assess objectives.
  9. Select resources to support instruction and learning activities.
This is the instructional design model proposed by Kemp... and I put it here just for reference!!! Not so bad... Right??

Sunday, June 29, 2008

The 'cognitive' factor of Instructional design


Training can be approached from many perspectives. Performance technologists, such as Stolovitch, like to think of training as a strategic investment contributing to the corporate bottom line. Another perspective is to think of training as a kind of service to people. Trainers offer a service to learners intended to improve their functioning in an environment. Thus training can be thought of as a kind of help, similar to help systems in a computer environment or aid provided by social services.
The quality of training, like help, can be judged on its effectiveness and its efficiency. To understand this perspective, think of a software company such as WordPerfect Corporation.

Let's say WordPerfect offers a support system, where users having difficulties with software can phone and get help from a specialist. WordPerfect managers can evaluate the quality of help offered in a number of ways:

a. Effectiveness-solving the customer's problem, also known as the power of the help according to Inouye (1992). Does the help work?
-Availability of help. Is there someone around who can answer the caller's question?
-Relevance to the problem at hand. Is the information provided pertinent to the customer's immediate problem?
-Understandability. Is the help clear to the user? Are the instructions executable? Is the user able to take corrective action based on the help?
b. Efficiency-the timeliness and affordability of providing the help.
-Mean time to help. How long does the customer have to wait after phoning in for assistance until their problem is satisfactorily solved? Is the help message brief and to the point?
-Cost. What are the resources needed to provide the help? What are the costs to WordPerfect? To the customer?
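A minimal sketch of how these efficiency measures might be computed; the call records and figures below are hypothetical, purely to make "mean time to help" and cost per call concrete.

```python
# Hypothetical help-desk call records: (minutes until the problem was solved, cost to the provider in dollars)
calls = [
    (12.0, 9.50),
    (35.0, 22.00),
    (8.0, 6.75),
    (20.0, 15.00),
]

mean_time_to_help = sum(minutes for minutes, _ in calls) / len(calls)
mean_cost_per_call = sum(cost for _, cost in calls) / len(calls)

print(f"Mean time to help: {mean_time_to_help:.1f} minutes")   # 18.8 minutes for this toy data
print(f"Mean cost per call: ${mean_cost_per_call:.2f}")        # $13.31 for this toy data
```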

Each time training is offered, there is by definition some problem to be solved, some goal to be reached. Even though not all training problems are as neatly quantifiable as on-line assistance systems, there are clear implications for training design which reinforce the lessons learned from the cognitive training models above. Training is most effective when it:
--Responds to an immediate performance need. Training should seek to create "teaching moments" wherein the learner is trying to solve a problem, clearly needs assistance, and is highly receptive to assistance that will help him/her perform better.
--Seeks to meet those teaching moments with relevant, clear instructional messages and practice opportunities.
--Doesn't give too much or too little help. Excessive training wastes money, and it interferes with the learners' developing cognitive skills in detecting and learning from their own errors (Burton & Brown, 1979).
--Doesn't get in the learners' way. People spontaneously apply a set of cognitive strategies to any situation or problem. Training should work with those strategies rather than compete with them.
The additional guidelines offered below are based on our review of cognitive training models and on the literature in cognitive learning and teaching methods. The guidelines are not mutually exclusive; for example, only one guideline specifically addresses motivation, yet every guideline affects motivation. We believe that designers can make use of these or similar guidelines as they seek creative solutions to problems in the design of all kinds of training and instruction.
Foster a learning culture.

1. Offer training within an overall culture that encourages cooperation, risk-taking, and growth.
2. Get learners' buy-in and commitment in achieving training goals.
Motivate learners.
3. Demonstrate the value of the training to the learners and cultivate their confidence in their ability to master the objectives.
Make training problem-centered.
4. Draw on authentic needs and contexts; make requirements of learning tasks similar to important requirements of job tasks.
5. Encourage learners' active construction of meaning, drawing on their existing knowledge (Resnick, 1983).
6. Teach multiple learning outcomes together (Gagne & Merrill, 1990).
7. Sequence instruction so that learners can immediately benefit from what they learn by applying it to real-world tasks.
Help learners assume control of their learning.
8. Provide coaching.
9. Provide scaffolding and support in performing complex tasks.
a. Adjust tools (equipment), task, and environment.
b. Provide timely access to information and expertise.
c. Provide timely access to performance feedback.
d. Utilize group problem-solving methods.
e. Provide help only when the learner is at an impasse and only enough help for the learner to complete the task.
10. Fade support.
11. Minimize mean time to help (i.e., provide "just-in-time" training).
12. Encourage learners to reflect on their actions.
13. Encourage exploration.
14. Encourage learners to detect and learn from their errors.
Provide meaningful "practice."
15. Provide opportunities for learners to apply what they've learned in authentic contexts. If it is not feasible to practice on real tasks, provide cases or simulations.
16. Personalize practice

Saturday, June 28, 2008

Where ADDIE fails....

Many instructional designers still open their textbooks when they are asked to design a new course. They tend to look at ADDIE again...
Unfortunately, ADDIE fails to deliver most of the time...
So what are the things an instructional designer must rely upon...???
Blending some theories of Joseph South, I would like to narrate some points here...

1. Narrative: Story has been used to bind people together in shared knowledge and understanding for thousands of years. It is arguably the first instructional strategy ever used to convey essential cultural knowledge to the rising generations. It's an essential aspect of virtually every culture on the planet. We are wired for narrative. We think in narrative, we speak in narrative, we even dream in narrative. We perceive our very existence as an unfolding narrative. We collectively pay billions of dollars to experience well-crafted (and not so well-crafted) narrative. Narrative design needs to be deeply understood and routinely practiced in our field. How many instructional designers have even heard of the field of narratology? How many designers have studied the construction of a documentary, a screenplay, a dance performance, a musical composition? We are starting to scratch the surface with our recent attention to role-play scenarios and gaming, but have far, far to go.

2. Aesthetics: Human beings respond powerfully to aesthetic design. Every decision we make, like it or not, is mediated by our subjective perceptions. The "Bottomless Soup" study done by Brian Wansink, a recent winner of the Ig Nobel Prize for nutrition (who also has a book on the subject, Mindless Eating: Why We Eat More Than We Think), demonstrates this beautifully. And, of course, aesthetics don't only make us fat. They can relax us, orient us, inspire us, enliven us. Aesthetics are much more than the surface qualities of an object, but extend to encompass the richness of our experience, and the best applications of aesthetic design can embody and express layers of meaning in a profound, prereflective way. Patrick Parrish is starting the conversation in our field. This conversation needs to be accelerated and expanded.

3. Learner Emotion: Human beings feel, and what they feel influences their readiness to learn, their willingness to learn, how much they actually learn, and whether they will (ever) decide to learn about a particular topic again. As Russ Osguthorpe asks, "If they got an A in the class, and tell us that they never want to see that content again in their lives, have they really learned what we intended to teach them?" Emotions can work for or against learning. In order to account for emotion in our learning design, we need to know what learners are feeling before, during, and after learning experiences occur. We have a whole science devoted to measuring learning before, during, and after learning experiences and, ostensibly, ways to intervene based on what is learned from these assessments. Where is the science and technique of measuring the learners' emotions? What are the best practices for how to intervene based on what is learned? What makes us think we can teach effectively if we only know what learners know and not how they feel? How they feel about learning this topic, how they feel about their ability to learn this topic, how they feel right now in this learning session during this learning activity? Engagement is both a cognitive and an emotional experience. Can you imagine a flow experience in a learning setting that was devoid of emotion? Can you imagine an overwhelmed, bored, distracted, or anxiety-filled learner maximizing their learning?

Sunday, June 22, 2008

Roles and responsibilities of a Visual Instructional designer in an academic e-learning environment

In my experience, corporate and academic e-learning differ in many aspects. Some of them are as follows:

· Academic e-learning requires a lot of concepts. It tends to focus on knowledge in the abstract: lots of concepts but not a huge amount of application, except to test that the learner understood the concepts. The application of the concepts tends to be abstract, such as answering questions on a test or basing new concepts on the first. The learners are not expected to immediately apply the knowledge to their lives.
The corporate clients usually want both concepts and behavior change. They want learners to be able to apply the concepts in the real world, such as determining if a cross-border financial trade they have been offered is ethical and legal.
For this reason, flow and relevance become irrelevant in some places in an academic context. Instructional designers from purely corporate backgrounds tend to misunderstand this.
This is my own experience, and even a great instructional designer like Cathy Moore has expressed the same.
Every academic e-learning environment requires a visual instructional designer who can visualize the concepts from an ID perspective, as academic e-learning material gives more thrust to visuals compared to corporate material. Most corporate material concentrates on the presentation of text, which is where the structuring of sentences matters more. But in an academic environment the on-screen text should be presented in such a way that it makes an impact on a student's mind, as it is a basic thing on which the student should build a lot of concepts.
As corporate materials mainly need text animation, the process followed will also be different. In the corporate segment, visualization is mostly simple compared to the content part, so the presentation is done before recording the voice-over; it is easy to integrate the sound afterwards. But in any case, particularly if you are dealing with the K-12 segment, you must never record the voice afterwards. It can be called a sin. Educational concepts like experiments and derivations can never be written effectively in a storyboard and are very difficult to synchronize with sound. So the voice must be recorded earlier and synchronized effectively, so that the student gets cognitive support or reinforcement.
Where we go wrong??
I would say that, till now we were creating assets; nothing other than that.
WHY?
We will all understand that when we integrate voice for physics and chemistry files. It may not happen in mathematics, but we will never be able to synchronize it effectively. Another thing: whoever does the animation should integrate the sound. It is a must in an academic environment. Very few are smart enough to know the entire concept; beware of that. What we were doing was creating a storyboard and throwing it to the animators for animation, calling for a mere visualization meeting.
Let's analyze what happens at a visualization meeting. The ID explains the concepts???? NO... NEVER... Even if he explains, there will be a huge gap in understanding, because our content is so huge. AND YOU CAN NEVER VISUALISE DERIVATIONS AND EXPERIMENTS WITHOUT THE COMPUTER IN FRONT OF YOU, i.e. YOU MUST EXPLAIN THOSE THINGS WHENEVER ANIMATORS DO IT. (Just to get an idea of this, take any experiment in physics or chemistry and try running it while prompting the VO.) YOU CANNOT EXPECT ANY ANIMATOR TO DO THIS properly...
For EXPERIMENTS, DERIVATIONS, PROBLEMS, ETC., where there is so much highlighting and so many step-by-step procedures that are too technical, a person should be there with the animator. Here comes the role of a visual ID. Where an authoring ID should effectively write the voice-over, the visual ID is responsible for making the presentation effective by choosing the proper place for on-screen text, user-friendly colours, the pace of the animation, effective highlighting, proper VO synchronization, etc. This is the exact role of an ID in an academic e-learning organization, where the ID acts as a bridge between education and technology. Those who manage content are also called IDs, but they are senior content writers only.
IN CASE OF MATHEMATICS
I went through many slides of our mathematics material. I really feel a lot of improvements are needed, as mathematics is a subject that completely needs the attention of a visual ID. The style we use to present will depend on a lot of theories which are really valid, such as the split-attention theory (explained below).
Here the diagram on the right can enhance learning compared to the diagram on the left. Theories such as cognitive load theory, colour theory, positioning theory, etc. also come into the picture when we do this kind of academic e-learning, where we present intensive study material to the user.
About my role..
Although the importance of the flow of content cannot be forgotten, it is not the crux in a company like ours. We should concentrate on communication with the GDs. Visualization for a whole lesson should not be done in one go. We have to concentrate on a small part, explain it to them, and be with them for each and every step. The voice should also be available. We can reduce mistakes by at least 80 percent by doing so. I feel this is the best time to implement it, as we are revamping the SBs. I can be responsible for the visual presentation of the material, the on-screen text, the positioning of the images and text, the whole layout, and the synchronization of voice accordingly. (Don't confuse this with the responsibilities of the animation head, as he will be responsible for the animation quality.)
Note: Please note that our updated SBs are also incomplete in terms of animation description and on-screen text. We will get confused about where to place the OST. (These kinds of issues can be resolved by utilizing a visual ID.)
I feel my services can be properly utilized if I am placed in the visualization part. (I am ready to be involved in the visualization of all subjects, giving thrust to maths and to the derivation and experiment parts of physics, chemistry, etc.) As a part of this, I have to go through the storyboard at least once and make necessary changes in OSTs accordingly, which won't disrupt our VO recording.