

Wednesday, October 31, 2007

Modeling shelliness and alteration in shell beds: variation in hardpart input and burial rates leads to opposing predictions

Abstract: Distinguishing the differential roles of hardpart-input rates and burial rates in the formation of shell beds is important in paleobiologic and sedimentologic studies, because high shelliness can reflect either high population density of shell producers or lack of sediment. The modeling in this paper shows that differences in the relative importance of burial rates and hardpart-input rates lead to distinct patterns with respect to the degree of shelliness and taphonomic alteration in shell beds. Our approach substantially complements other models because it allows computation of both shelliness and assemblage-level alteration. To estimate shelliness, we dissected hardpart-input rates into dead-shell production and shell destruction rates. To estimate assemblage-level alteration, we computed an alteration rate that describes how rapidly shells accrue postmortem damage. Under decreasing burial rates but constant hardpart-input rates, a positive correlation between alteration and shelliness is expected (Kidwell's R-sediment model). In contrast, under decreased destruction rates and/or increased dead-shell production rates and constant burial rates (Kidwell's R-hardpart model), a negative correlation between shelliness and alteration is expected. The contrasting predictions thus provide a theoretical basis for distinguishing whether high shell density in shell beds reflects passive shell accumulation due to a lack of sediment dilution or whether it instead reflects high shell input from a life assemblage. This approach should be applicable for any fossil assemblages that vary in shell density and assemblage-level alteration. An example from the Lower Jurassic of Morocco, which has shell-rich samples less altered than shell-poor samples, suggests that the higher shelliness correlates with higher community-level abundance and lower proportion of juveniles of the main shell producer, supporting the driving role of hardpart-input rates in the origin of the shell-rich samples in this case. This is of significance in paleoecologic analyses because variations in shelliness can directly reflect fluctuations in population density of shell producers.

Recognizing the differential role of sedimentation rates and hardpart-input rates (i.e., dead-shell production and shell destruction rates) in shell bed formation is important because high shell density in death assemblages can result from lack of sediment or high input of shells from a life assemblage. One of the main taphonomic paradigms in interpreting marine shell beds is that sites of slow net rate of sedimentation should be more favorable for formation of denser shell concentrations than sites of higher net rate of sedimentation (the low-dilution maxim of Kidwell 1991). Kidwell (1985, 1986a) built a theoretical framework for shell bed genesis and hypothesized that a model of shell bed formation could be cast mainly in terms of changes in sedimentation rate (this is known as the R-sediment model). As an alternative, Kidwell (1985, 1986a) proposed the R-hardpart model, which predicts that variations in dead-shell production and shell destruction rates primarily control the formation and preservation of shell beds. Thanks to the pioneering insights of these initial models, it is now clear that any satisfactory explanation of shell beds, and of taphonomic patterns in general, has to be rooted in sedimentation rate and hardpart-input rate. The R-sediment model has great power and robustness and is preferred because of its predictivity in terms of postmortem bias and biotic interactions (Kidwell 1986a). As many shell beds are indeed preferentially associated with omission or erosional surfaces (Kidwell and Jablonski 1983; Kidwell 1989), the R-sediment model has been supported and successfully used in sequence stratigraphic and environmental analyses (Beckvar and Kidwell 1988; Kidwell 1993; Abbott 1997, 1998; Naish and Kamp 1997; Kondo et al. 1998; Fürsich and Pandey 2003; Yesares-García and Aguirre 2004; Cantalamessa et al. 2005; Parras and Casadío 2005). Sequence stratigraphic simulations also show that uniform stratigraphic distribution of fossils can be changed to nonrandom and clustered distribution because sequence architecture is strongly controlled by sedimentation rates (Holland 1995, 2000).

The R-sediment model (Kidwell 1985, 1986a) predicts that there will be positive correlation between shelliness and taphonomic alteration because shells are exposed longer when sediment dilution is low. Also, it predicts that with a decrease in sedimentation rate, an increase in shelliness will be associated with an increase in time-averaging (Fürsich and Aberhan 1990; Kowalewski et al. 1998), morphologic variation, and a change in community composition. However, the predictions of the R-hardpart model have not previously been explored fully.
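
To make the two opposing predictions concrete, here is a deliberately crude numerical toy (our own illustration with invented rates and definitions, not the model or equations from the paper): shelliness rises whenever shell input outpaces sediment input, while alteration grows with the time shells spend exposed before burial.

```python
# Toy illustration only: invented steady-state rates, not the paper's model.
def shell_bed(production, destruction, burial):
    """Return (shelliness, alteration) for crude, made-up rate definitions."""
    hardpart_input = production - destruction        # net dead-shell input
    shelliness = hardpart_input / (hardpart_input + burial)
    alteration = destruction / burial                # damage accrues while exposed
    return shelliness, alteration

# R-sediment scenario: burial rate drops, hardpart input held constant
print(shell_bed(production=10, destruction=2, burial=8))  # (0.50, 0.25)
print(shell_bed(production=10, destruction=2, burial=2))  # (0.80, 1.00) -> both rise together

# R-hardpart scenario: production rises and destruction falls, burial held constant
print(shell_bed(production=10, destruction=4, burial=4))  # (0.60, 1.00)
print(shell_bed(production=20, destruction=1, burial=4))  # (0.83, 0.25) -> shelliness up, alteration down
```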

Further analyses of the interplay between hardpart-input rate and sedimentation rate are necessary for better understanding of the dynamics of shell bed formation. First, it is of primary interest to know to what degree high shell density corresponds to original live abundance or whether it reflects only the effect of passive accumulation. In turn, it can be important to distinguish whether rarity of shells in shell-poor deposits is due to low hardpart-input rate or high background sedimentation rate. If the role of hardpart-input rates and sedimentation rates in formation of shell beds can be differentiated, these questions can be answered. Although community-level abundance in the fossil record is mostly assessed in terms of relative numerical abundance, the recognition of fossil macroinvertebrate populations with originally high density is also of ecologic importance because dense populations of shelly organisms play an important role as ecosystem engineers in aquatic habitats (Gutiérrez et al. 2003).

Integrating ERD and UML Concepts When Teaching Data Modeling

In this paper, we describe a teaching approach that evolved from our experience teaching in both the traditional database and systems analysis classes as well as a number of semesters spent team-teaching an object-oriented systems development course. Fundamentally, we argue that existing knowledge of structured systems development can and should inform our teaching processes when teaching object-oriented systems development techniques. We draw from an anecdotal industry example provided by one of our former students to illustrate the value of this approach given our perception that there is a need in practice today to easily shift from structured to object-oriented thinking.

There seems to be growing interest in adopting the Unified Modeling Language (UML) within information systems (IS) curricula, and some authors of database texts have expressed interest in changing their widely adopted books to include the UML notation when representing data models. One might argue that notation is only syntax; therefore, a change in notation should not require a change in the content or approach used in teaching data modeling techniques. However, the interest in UML suggests consideration of a more fundamental question: Should we rethink the processes taught in our database courses to more closely align the way we think about data with the way applications are developed?

Currently, most database courses use entity-relationship diagram (ERD) techniques for data modeling. The traditional ERD has a rich theoretical basis and is specifically intended for modeling relational database structures (Chen 1976, 1977; Date 1986; Martin 1982). Clear guidance exists in many academic and practitioner books about how to use this method to develop conceptual models and transition them to logical forms (including normalization practices) and physical forms that are focused on tuning for performance (Chen 1977; Hoffer, Prescott, and McFadden 2005; Martin 1982). Further, some empirical studies suggest ERDs are often more correct and easier to develop than corresponding object-oriented (OO) schemas (Shoval and Shiran 1997).

Advocates of the UML suggest that the class diagram should replace the ERD notation and approach to data modeling. Class diagrams provide the same opportunity to document data and their relationships as ERDs do. In addition, class diagrams provide for the capture of operations. This allows for the modeling of relational data while also providing rich support for object-oriented implementations in OO programming languages (e.g., Java) as well as more component-based approaches (e.g., J2EE). Moreover, the UML includes mechanisms for modeling behavior, and the acceptance of the UML as an Object Management Group (OMG) standard provides wide support in industry for using the UML, especially for the design of object-oriented software (Halpin and Bloesch 1999). While some might advocate ERD versus UML as contrasting methodological perspectives, we see advantages of teaching both methods to information systems students in an integrated fashion.
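
As a quick, hedged illustration (ours, not the paper's), a class captures the same attributes an ERD entity would, plus the operations that a class diagram can also record:

```python
# Illustrative sketch: an "Order" class holding ERD-style attributes plus behavior.
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int
    customer_id: int
    lines: list = field(default_factory=list)  # (quantity, unit_price) pairs

    def total(self) -> float:
        # An operation: something an ERD has no notation for.
        return sum(qty * price for qty, price in self.lines)

order = Order(order_id=1, customer_id=42, lines=[(2, 9.99), (1, 4.50)])
print(order.total())  # 24.48
```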

Understanding the implications of the adoption of UML notation in the database class cannot be undertaken without consideration of the other courses in the IS curriculum as well as indications of the future of practice. In a recent panel discussion focused on database course content (Vician et al. 2004), the interrelationships among courses in the IS curriculum were discussed. Specifically, the panelists advised that when the core systems analysis course uses an object-oriented analysis and design approach, using a more traditional approach in the database class may lead to inconsistencies that hinder student learning.

A review of the trade literature suggests that transitions to object-oriented databases may be on the horizon, with Oracle and IBM indicating willingness to "go beyond classical relational dogma" (Monash 2005, p. 26). Some organizations have adopted object-oriented databases for storing important corporate data (Lai 2005). Further, as more applications are developed in OO languages such as Java or C++ while utilizing data stored in a relational database, IT professionals are left to write custom code to access the data or rely on object-relational mapping (Walsh 2005). Growing interest among firms such as Oracle as well as the ubiquity of relational databases suggests that mapping OO applications to relational data storage is the likely immediate experience current MIS students can expect in practice (Krill 2005).
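
As a rough sketch of the object-relational mapping mentioned above (assuming a library such as SQLAlchemy 1.4+; the Customer table is purely illustrative), a declarative mapping lets OO code and relational storage share one definition:

```python
# Sketch only: maps an OO class onto a relational table via SQLAlchemy.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"          # the relational side of the mapping
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)        # emits CREATE TABLE for the mapped class

with Session(engine) as session:        # the OO side: work with objects, not SQL
    session.add(Customer(name="Acme Corp."))
    session.commit()
    print(session.query(Customer).count())  # 1
```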

Should we rethink the processes taught in our database courses to more closely align the way we think about data with the way applications are developed? This is our fundamental question. In this paper, we draw from our experience in the classroom and in practice to make recommendations for how the UML notation can be integrated with ERD when teaching data modeling.

Saturday, October 27, 2007

The Essential Skills of Data Modeling

The critical data modeling issue is learning to think like a data modeler. The representation method is a less important concern, because all dialects of these methods capture the same core data. For data modeling teachers, there are two issues. First, what representation method enables quick sketching of models on a board? Second, what method should students use to capture the fine detail for their assignments? Other issues related to teaching data modeling are also discussed, including the argument for intertwining the teaching of data modeling and SQL.

Approximately 20 years of teaching data modeling have provided ample evidence to me that the critical issue is not how to represent entities and relationships. The essential skill is learning to identify entities and the correct relationships among them. Students demonstrate over and over again the difficulty that they have in learning to think like a data modeler. The various representation methods, such as Chen's E-R diagram (Chen 1976) and UML (Rumbaugh, Jacobson, and Booch 1999), are easily learned. However, proficiency with a representation method does not make a data modeler. Data modeling is a higher-level skill than drawing a diagram.

I reiterate continually to my students, and in my textbook (Watson 2006), that the purpose of a data model is to capture reality so that the database based on the data model can be used to answer questions about reality. A model that fails to represent reality is likely to fail at some point because a client's question cannot be converted to SQL. When real-world entities or relationships are not represented in a data model, then real-world queries about these missing entities and relationships cannot be answered. I learned this need to focus on reality representation early in my career when I was given data collected by pharmacy researchers who needed some statistical analysis. The very first analysis they asked me to run, which was central to their research, could not be performed because they had not collected data on the relationship between two central entities. The data model did not represent the reality they wanted to study, and I could not answer a key question about that reality.

Some of the typical errors that students make include:

* Not recognizing that an attribute is an entity

* Failing to generalize several entities as a single entity

* Not reading a relationship both ways and thus making a cardinality mistake

* Ignoring exceptions that result in a failure to represent reality.

These are problems of domain understanding and not representation. It does not matter how you represent errors of domain interpretation. They are still errors.

As far as I am concerned, reality is far more important than representation. Even if there were a single best representation, and I don't believe there is, as I will argue shortly, it is worthless if the resulting data model does not capture the domain of interest's reality. It is all about reality and not so much about representation.

2. REPRESENTATION CHOICES

From a teaching perspective, there are two issues with regard to representation. First, I need a method that enables me to quickly sketch a model on a board. Second, I need a method that my students can use to capture the fine detail of a model and generate SQL create and constraint statements.

Thus, I don't believe that one can argue that one data modeling dialect is superior to another without considering the context in which it is used or the tools available. If a student comes to my office to ask for help with a model, I find it far easier to work with them using a sheet of paper or whiteboard than sitting at a computer using a CASE tool.

2.1 Minimalist, Quick Modeling

For quick modeling on a board or sheet of paper, when dealing with a class or small group of students respectively, a data modeling dialect is required that quickly records entities, identifiers, attributes, and relationships. It should also be a system that is quickly amended in response to class questions or recognition of exceptions. Thus, I don't worry about modality (optional or mandatory), but I am concerned about recognition of weak entities (represented by a '+') because of their impact on primary keys and maybe foreign keys. My models (see Figure 1) sparsely capture the essentials.

2.2 CASE Tools

In my current class, students use DB Visual Architect (DB VA) under the auspices of Visual Paradigm's academic partnership. DB VA is typical of the E-R modeling tools in common use in industry. Models can be drawn quickly and fine detail recorded, such as modality and data type. When the model is complete, students can generate SQL statements for creating the database.

As Table 1 shows, translation between the two representations' dialects is a simple task, and my students have had little difficulty in making the transition between my board drawings and DB VA. Indeed, some students directly translate my drawings into DB VA format in class so they have a record of all the models I have discussed. I had to cover two aspects of DB VA with the class that were not immediately obvious: representation of recursive relationships and weak entities.
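
For readers who have not used such a tool, the sketch below (our illustration, not DB VA output) shows the kind of SQL a weak entity ultimately generates: its primary key is composite and borrows the owner's key, which also becomes a foreign key.

```python
# Illustrative only: a weak entity (invoice_line) owned by a regular entity (invoice).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice (
    invoice_no INTEGER PRIMARY KEY,
    issued     TEXT
);

CREATE TABLE invoice_line (
    invoice_no INTEGER NOT NULL,             -- borrowed from the owner
    line_no    INTEGER NOT NULL,             -- partial identifier
    amount     REAL,
    PRIMARY KEY (invoice_no, line_no),       -- composite key marks the weak entity
    FOREIGN KEY (invoice_no) REFERENCES invoice (invoice_no)
);
""")
conn.close()
```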

Employment Analysis of ERD vs. UML for Data Modeling

In this paper we combined keyword search and content analysis to investigate the employment demand for ERD vs. UML in the context of data modeling. We found that data modeling is a commonly required skill for general business information systems professionals, often with no specific methodology attached to the employment requirements. ERD and UML are required in narrower and often separate fields of building data models for information systems: ERD is mentioned in the domain of systems analysis and design and database administration, while UML is required mostly in software engineering and application development. The division between ERD and UML in professional data modeling remains, although a small number of employment requirements demand both methodologies and require a mapping capability between the two approaches.

Information and communication technology is one of the fastest changing fields in employment skills. This has resulted in constant revision of academic curricula and textbooks to best match educational objectives to the demand for professional skills (Gorgone, et al. 2002). In IT education, the relationship between recent IT employment opportunities and the number of college students entering this field further stresses the need to examine employment skills. Employment opportunities published on the Internet provide not only a job search tool for many job seekers but also a means of systematically monitoring changes in employment skill demands in real time (SkillPROOF Inc. 2004).

In this study, we analyzed IT employment data published by employers and focused on the ERD (Entity Relationship Diagram) vs. UML (Unified Modeling Language) requirements in the field of data modeling. We hope that our findings can inform educational decisions about what we should include in our teaching scope and identify trends in the required professional skills.

2. EMPLOYMENT DEMANDS

The employment data used in this study have been published by individual companies on the Internet and collected daily by SkillPROOF Inc. since the beginning of 2004. The data are collected from up to 137 IT-focused companies. Each data sample contains attributes of company industry, posting date, job title, job responsibility, skills, and education or training requirements. General background information on the data collection and categorization can be found on the SkillProof.com website (SkillPROOF Inc. 2004).

From the archived data, there is a total of 35,932 jobs. The job counts from the top 12 industries (among a total of 46 industries) are plotted in Figure 1 to illustrate the overall distribution of the employment demands. The distribution reflects the post-dot-com, post-September 11 IT employment demands.

3. DATA ANALYSIS

We first applied keyword searches to the job descriptions to categorize the relevant ERD vs. UML skill requirements according to industry and then according to job types or job functions. We further sampled the contents of the job descriptions to investigate the implications for the job requirements in ERD vs. UML.

3.1 Keyword Categorization According to 'Industry'

We used the keywords 'data model' or 'data modeling', 'ERD', and a few commonly referenced design tools like ErWin (2006), Visio (2006), and the database tool 'TOAD' (2006) to search for data modeling and database analysis, design, and management related employment requirements. Similarly, we used the keyword 'UML' to search through the same data sets. We classified the search results according to the top 12 industries identified in Figure 1 and plotted the job counts in Figure 2(a), (b) and (c) below for comparison. One extra industry, 'pharmaceutical', is added because the job counts in that industry are within the range of interest in the new aggregation.
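
A minimal sketch of this kind of keyword categorization is shown below; the postings and keyword lists are invented for illustration and are not the SkillPROOF data set.

```python
# Count keyword-matched postings per industry (illustrative data only).
from collections import Counter

KEYWORDS = {
    "data modeling": ["data model", "data modeling", "erwin", "visio", "toad"],
    "ERD": ["erd"],
    "UML": ["uml"],
}

postings = [
    {"industry": "defense", "description": "Produce UML class and sequence diagrams"},
    {"industry": "IT consulting", "description": "Build the ERD and data model in ERwin"},
    {"industry": "retail", "description": "Data modeling for the sales data warehouse"},
]

counts = {category: Counter() for category in KEYWORDS}
for job in postings:
    text = job["description"].lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            counts[category][job["industry"]] += 1

for category, by_industry in counts.items():
    print(category, dict(by_industry))
```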

The job counts from the ERD and UML keyword searches are notably lower, at roughly half of the counts from the data modeling search. Overall, the distributions for data modeling and ERD are similar in that both job spectra are broadly distributed across industries. They share three industries with strong demand: defense, high-tech and IT consulting. One exception is that the retail industry shows very significant demand for data modeling skills, whereas telecommunications is an industry with significant demand for ERD skills.

For the skill demand in UML, the outcome is significantly different. The defense industry stands out for its relatively large number of jobs requiring UML skills. In terms of job counts, the demand for UML is similar to the demand for ERD in both high-tech and IT consulting; however, both are less than 25% of the counts in the defense industry.

3.2 Keyword Categorization According to 'Job Type'

Another classification using keyword search is to sort all the job requirements according to job function or job title. Using a standard job classification, we were able to use the keywords 'data modeling', 'ERD' or 'UML' to plot the searched skills vs. the types of jobs defined. The search results are summarized in Figure 3(a), (b) and (c) respectively for 'data modeling', 'ERD' and 'UML'. In this classification, we find that, again, UML is concentrated in a single popular job type, 'software development', whereas 'data modeling' or 'ERD' skill demands are more evenly distributed among the various job types, such as business analyst, software developer, IT consultant, technical writer and database administrator. One exception is that the job type 'project manager' shows a significant count in the ERD search results but is not as prominent in the 'data modeling' group.

Thursday, October 25, 2007

Paint in Spain - Choice of Primers and Sealers

Once you have decided that you need a Primer or a Sealer to get the best results from a painting project, it is important to be able to carefully choose the best type of product for the application.

As with paints, there are two broad classifications of primers and sealers: latex or water-based products, and alkyd or oil-based products. Both types are available for either interior or exterior use. In addition, there are also shellac-based primers that have alcohol as their thinner.

In the majority of cases, quality latex primers and sealers perform as well as oil-based products, if not better. However, on severely staining wood and on heavily chalked surfaces, oil-based primers can provide far better adhesion and the ability to block staining.

Exterior Applications

The requirement for using a primer or sealer, and the type of product that should be used, obviously can and does vary for each and every painting project. Here are some general guidelines for common applications:

For brand new Unpainted Wood

If the wood is not severely stained, then you could use either a quality acrylic latex or an oil-based exterior wood primer. In the case of severely stained wood, an oil-based stain-blocking primer would be a better choice. It is always a good idea to prime and paint bare wood within, say, 10 to 14 days in order to keep the wood fibres from deteriorating and reducing adhesion to the primer. Many modern paints do not need a separate undercoat, in that the first coat will act as the undercoat and the second coat as the final finish.

Weathered and Unpainted Wood

Here you can use either a quality latex or an oil-based primer. It is very important that you clean and sand the wood thoroughly before priming, because any deteriorated wood fibres must be removed or adhesion of the primer will not be as effective. It is best practice to apply the primer as soon as is practical after surface preparation.

Previously Painted Wood

All loose or flaking paint should be scraped off as required and rough edges feather-sanded. Any bare spots should be sanded thoroughly and dusted off. In addition, as much chalk as possible should be removed before priming. If the old paint is very chalky and all the chalk cannot be removed, an oil-based primer is recommended. If the old paint is in sound condition and is still adhering well, priming can be beneficial but is not necessary.

Stucco and Other Masonry

On new masonry, or older surfaces that are very porous, a good latex masonry sealer or primer would be the order of the day. Where you are repainting, use a sealer only where all the old paint has been removed during surface preparation.

Aluminum or Galvanized Iron

Clean the surface using a non-metallic scouring pad or steel wool (be sure all steel particles are washed off). Then apply a corrosion-inhibitive metal primer to all exposed bare metal.

Ferrous Metals

Remove any rust by wire brushing. Clean and gently rinse off and allow to dry before applying either a latex or oil-based rust-inhibitive primer. I would suggest two coats of primer will provide added protection against future rusting. Rust can come through at a later stage if not properly treated at day one and ruin an otherwise perfect job.

Interior Applications

Usually an interior primer is designed for a very specific application. These come in both latex and oil-based formulations, so you have a range of products to choose from. Please bear in mind, however, that latex products are much lower in smell, which is a very large bonus for any indoor project, particularly if you are living in the property at the time.

Freshly Plastered Walls using English Plaster or Dry Lining

Here a paint that will breathe would be best; something like Dulux Trade Fast Matt springs to mind. Using this paint you can paint fresh plaster sooner. It is suitable for dry lining and is ideal for critical lighting, making it a great paint for fresh plaster.

Article Source: http://EzineArticles.com/?expert=John_E_Lewis

Solar Home - Conserving PV Power

Solar homes. Many people talk about them, although few in the U.S. do more than talk. They are said to reduce utility bills. Some say a solar home cuts utility bills drastically. Some are able to sell excess power back to the power grid. Savings are not automatic, however. Steps must be taken to make the solar home efficient and to conserve the electricity that is produced.

Ways to conserve solar photovoltaic power in the home must be found. The home owner must actively seek out ways to make those photovoltaic (PV) panels on the roof as efficient as possible. Photovoltaic power is free, but it cannot be generated in unlimited quantities. It is important to take measures to use it to best advantage.

Of all the ways to conserve solar photovoltaic power in the home, the most important is probably replacement of power-gobbling electric appliances. Home lighting can also be changed to conserve solar photovoltaic power (PV power). Finally, power usage can be reduced by making every person in the solar home conscious of conservation methods.

12 Ways to Conserve Solar Photovoltaic Power in the Home

1. Shop for energy-efficient appliances and home electronics for your solar home. Most products in this category carry EnergyGuide labels. These labels give an estimate of the products' energy consumption or energy efficiency, and show the ratings of similar appliance models. In the U.S., appliances and home electronics that meet strict Department of Energy (DOE) and Environmental Protection Agency (EPA) energy efficiency criteria carry Energy Star labels.

2. Replace an electric water heater in a solar home with an energy-efficient propane or natural gas heater. Connect the new water heater to a solar water heater system. The sun will preheat the water, and the new unit will have less work. Wrap the water heater with thermal wrap to retain heat.

3. Replace the solar home's electric furnace with an energy-efficient propane or natural gas unit. Install a solar panel air heater to reduce the work load of the new furnace. If you are building a new solar home, consider passive ways to heat the floors and air, further reducing the workload.

4. If the solar home is in a hot, dry climate, use an evaporative cooling unit in place of an electric air conditioner.

5. Check weather stripping around all windows and doors. Seal cracks and openings. Stopping air leakage in and out can improve indoor climate control in every season.

6. Insulate the solar home well. Insulation will keep heat or cool air from escaping. It will also keep outdoor cold or heat from entering readily.

7. Use heavy, insulated drapes and window shades/blinds in the solar home to keep out hot or cold air, making cooling and heating units work more efficiently.

8. A serious solar home owner can save even further by using a solar oven for cooking whenever possible. Especially good in hot climates or summer months, a solar oven will cook food with solar energy, and avoid heating the home. Money is thus saved in two ways.

9. Control of computer usage is another of the many ways to conserve solar photovoltaic power in the home. Monitors should be turned off whenever the computer will not be in use for 20 minutes or more. The CPU and monitor both should be turned off when the unit will not be used for 2 hours or more. Power down or sleep modes should be set.

10. Replace light bulbs with Energy Star qualified compact fluorescent light bulbs. If every U.S. home replaced just one light bulb with one of these bulbs, we would save more than $600 million in annual energy costs. To the solar home owner, such savings are vital.

11. Maximize your use of daylight, turning on lights only when necessary. Use windows to advantage, and install skylights where possible.

12. Landscape your property to block the heat of summer sunlight and permit winter sunlight. Deciduous trees near the solar home will shade windows in the summer and let warming sunlight through in winter months.

Ways to conserve solar photovoltaic power in the home are not limited to these twelve, but if these alone are used, the increase in efficiency will be tremendous.

Article Source: http://EzineArticles.com/?expert=Anna_Hart

Monday, October 15, 2007

HOW HIGH? HOW FAST? HOW LONG? MODELING WATER ROCKET FLIGHT WITH CALCULUS

We describe an easy and fun project using water rockets to demonstrate applications of single variable calculus concepts. We provide procedures and a supplies list for launching and videotaping a water rocket flight to provide the experimental data. Because of factors such as fuel expulsion and wind effects, the water rocket does not follow the parabolic model of a textbook projectile, so instead we develop a one-variable height vs. time polynomial model by interpolating observed data points. We then analyze this model using methods suitable to a first semester calculus course. We include a list of questions and partial solutions for our project in which students use calculus techniques to find quantities not apparent from direct observation. We also include a list of websites and other resources to complement and extend this project.

Water rockets provide an easily implemented, engaging activity that allows students to experience first-hand how scientists use mathematical modeling and the tools of calculus to determine properties not apparent from the raw data alone. In this activity, students develop a model for the rocket height of a single water rocket launch after videotaping it in front of a building of known dimensions and then use calculus concepts to analyze the rocket performance. Water rockets (see Figure 1) are cheap, re-usable, easy to launch, and have a very high fun-to-nuisance ratio. They also provide an example of some of the fundamental aspects of projectile behavior, while factors such as the propulsion mechanism and wind effects give them more complex flight paths than those of simple projectiles. Although there is a lot of available information about combustible-fuel model rockets (see section V), they move too fast for easy measurements and require too much space. Water rockets are neither as expensive nor as logistically difficult as combustible-fuel model rockets, yet they have just enough complexity in their flight paths to provide an opportunity to put the skills and concepts learned in calculus to work in a substantial way.

One of the main goals of the activity is to provide a physical context incorporating many central calculus concepts. Another objective is for students to create and interpret a model and realize that it provides information not available from the experimental data alone. In particular, once the rocket rises above the backdrop of the building, there is no longer a frame of reference to estimate its height experimentally, thus necessitating a mathematical model. Also, questions such as finding the impact velocity and maximum height can only be addressed with a model; these quantities cannot be determined from the raw data or even from the videotape. Students also explore some of the limitations of such modeling efforts.

Most standard calculus (or physics) textbooks include a parabolic model describing the flight of a projectile. In the one-dimensional case, the function s(t) = v_0 t - (1/2) g t^2 models the vertical position of a projectile with respect to time t, where g = 32.2 ft/sec^2 is Earth's gravitational constant and v_0 is the initial velocity. Other than the initial acceleration, this model considers only gravity acting on the rocket, and ignores air resistance, for example.

Experiments demonstrating this parabolic model often require access to a large windless space (such as a hangar) and a mechanism to measure the initial velocity, neither of which was readily available to us. Rather than trying to design an experiment to demonstrate the parabolic model, or to fit a parabolic model to the experimental data, we decided to explore the vagaries introduced by wind (irregular gusts, not just air resistance) and varying propulsion by modeling individual water rocket launches.

To collect data, students first launch the rockets against a backdrop of known height, such as a building, and videotape the experiment. Then, they use the dimensions of the backdrop, the position of the camcorder, and an analysis of the videotape to determine the rocket height at several times. With these data points and a graphing calculator or CAS, students can use curve fitting tools to construct a higher order polynomial model of the height of the rocket as a function of time. We use polynomial interpolation since it is easier to motivate within our curriculum than, for example, a least squares fit, and a simple interpolation command is available in Maple. However, any curve fitting technique available on a graphing calculator or CAS would work as well.
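
The sketch below shows one way to carry out the interpolation and the subsequent calculus with NumPy rather than Maple; the (time, height) observations are made up for illustration, not actual launch data.

```python
import numpy as np

# Hypothetical heights (feet) read off the videotape at half-second intervals.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
h = np.array([0.0, 35.0, 60.0, 72.0, 65.0, 30.0])

# Degree-5 polynomial through all six points, like Maple's interpolation command.
height = np.poly1d(np.polyfit(t, h, deg=len(t) - 1))
velocity = height.deriv()        # first derivative: vertical velocity
acceleration = velocity.deriv()  # second derivative

# Maximum height occurs at a critical point inside the observed interval.
critical = [r.real for r in velocity.roots
            if abs(r.imag) < 1e-9 and t[0] < r.real < t[-1]]
print("max height (ft):", max(height(c) for c in critical))
print("velocity at last observation (ft/s):", velocity(t[-1]))
```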

This project is intended for students taking single variable differential calculus. Students must be able to determine and interpret first and second derivatives. Substantial trigonometry and three-dimensional geometry are required for estimating the heights from the videotape of the rocket against the building, which provides an excellent review and valuable context for these mathematical foundations. Students will need to solve several equations in several variables for the polynomial interpolation, or use a graphing calculator or CAS with polynomial interpolation capability. Advanced students familiar with calculus in three dimensions may do more advanced modeling and analysis, such as developing a parameterized curve for the rocket's flight path and using it to consider such aspects as arc length and tangential velocities.

Do nanofibers improve filter performance?

Recent experimental work has demonstrated the benefits of adding nanofibers to microfiber nonwoven filter media. In this work, single-fiber efficiencies and drag are applied to model filter performance for steady-state coalescence of oil drops from air streams. The model results show the same trends as observed in the experiments, namely that the addition of small amounts of nanofibers significantly increases the Quality Factor. New results from the model and experiments show that there is an optimum amount of nanofiber.

INTRODUCTION

Recent work shows improved performance of nonwoven filter media by the addition of small amounts of nanofibers. (1) The purpose of this work is to determine whether there is an optimum amount of nanofibers to add to the filter media.

Our approach to this project is to model the filter using single-fiber capture mechanisms and single-fiber drag forces. The coalescence filter is assumed to operate at steady state with a uniform saturation of 10% (a typical value from our experimental data). The filter performance is determined using the Quality Factor. The model results are compared with experimental data.

Our model results show that there is an optimum amount of nanofiber. The highest Quality Factors occur when the ratio of nanofiber surface area to microfiber surface area is in the range of 1.0 to 2.0. Our experimental results agree, with the optimum occurring in the same range. Qualitatively, the model and experimental results are similar, but the model over-predicts the value of the Quality Factor due to simplifying assumptions used in developing it.

Coalescing filters are used throughout industry to separate small liquid droplets from gas streams or from another liquid phase. Several factors influence the efficiency and economics of the separation. In general, droplets in the 0.1- to 0.8-micron (µm) range are the most difficult to remove. Polymer nanofibers, made in our laboratory, provide a flexible and adjustable system for optimizing the filter structure to capture particles in the size range that has the highest probability for passing through the filter.

Unlike other filter media whose primary purpose is to stop particles from moving with the fluid stream, coalescing filter media have the additional requirements of making the drops coalesce into larger drops and of providing a means for them to drain out of the medium. In operations such as gas compression, coalescing filters may be used upstream of the compressor to protect the equipment. They may also be used downstream to collect compressor oil. The compressor oil is typically an expensive synthetic oil used in the compressor as a coolant, sealant and lubricant. Coalescence filters are used to recover and recycle the oil back to the compressor. Recovering even smaller droplets also reduces airborne emissions in many processes and helps in regulatory compliance.

There are a number of mechanisms that control the coalescence filtration process. (2) The process is sketched in Fig. 1. Single-fiber capture mechanisms (3) control the rate at which drops are captured within the filter media. The filter media act to slow the movement of drops, helping them to collide. Microscopic observation of the coalescence process shows that most of the drops visible to the microscope (20- to 200-µm range) are captured on the fibers. (4) When the captured drops form beads on the fiber that are large enough to see with an optical microscope, bead growth is rapid. (5) Drag of the gas phase, together with gravity forces, causes the enlarged drops to migrate out of the filter media.


Several parameters, including pressure drop and capture efficiency, characterize the performance of filter media. It is convenient to have one parameter that accounts for multiple effects. Brown (3) recommends using the Quality Factor, QF, defined by:

QF = -ln(C_out / C_in) / ΔP

where C_out / C_in is the penetration, defined as the ratio of the particle concentration passing through the filter to the particle concentration entering the filter, and ΔP is the pressure drop. The nature of capture efficiency is such that if you double the thickness of a filter medium, the penetration is squared (for example, 5% penetration becomes 0.25%), hence the logarithm of the penetration is proportional to the thickness. Conversely, the pressure drop is directly proportional to the filter thickness. Hence, ideally, the Quality Factor is independent of the medium thickness and provides a means of direct comparison between various media.
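
A small sketch of that thickness argument (our numbers are arbitrary, chosen only to show the cancellation):

```python
import math

def quality_factor(c_in, c_out, delta_p):
    """QF = -ln(penetration) / pressure drop."""
    return -math.log(c_out / c_in) / delta_p

# Single thickness: 5% penetration at a 200 Pa pressure drop.
qf_single = quality_factor(100.0, 5.0, 200.0)

# Doubled thickness: penetration is squared (0.25%), pressure drop doubles (400 Pa).
qf_double = quality_factor(100.0, 100.0 * 0.05 ** 2, 400.0)

print(round(qf_single, 4), round(qf_double, 4))  # 0.015 0.015 -> QF unchanged
```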

The numerical model applies volume-averaged continuum equations to account for conservation of mass for the gas and liquid phases. Capture rates are calculated for the dominant mechanisms of Brownian diffusion and direct interception using literature correlations. (3)

The gas-phase momentum balance is applied to determine the pressure drop. Drag correlations for flow around fibers are determined from literature correlations. The capture and drag correlations account for continuum, slip or molecular flow regimes, depending on the Knudsen number for the materials.

Thursday, October 11, 2007

Job Analysis and Compensation Modeling for Credit Staff and Mid-level Management: A Pilot Study

Who works for me? What do they do? How should I decide how much to pay them? These are questions that credit executives increasingly struggle with as they deal with operational changes driven by the development and implementation of automated systems for data management, transaction processing and the application of sophisticated credit management tools such as credit scoring models. In addition, pressure to reduce operating expenses and improve profitability in an increasingly competitive global environment has led to the regular practice of reducing headcount, especially at corporate operating levels. This has led to greater centralization and the development of shared service centers designed to perform transactions and functions that tend to be common across different divisions within a larger business entity.

In addition to centralization and shared service centers, many companies have begun to organize workflow through the use of cross-functional teams as opposed to the more traditional "functional silo structures." Much of this reorganization has been driven by the management theories suggesting that flatter organizations in which knowledge and tasks are shared are both more efficient and more effective at serving customer needs.


The purpose of this study is to examine the impact that business process restructuring has had on the skills, responsibilities, and compensation of credit, collections, accounts receivable, and cash application (hereon referred to as "credit management" or "credit") employees at the staff and management levels. Rather than examining the change itself, this is a study of the current state of organizational structure, job functions, and compensation in the credit area resulting from the recent organizational changes that have taken place. Specifically, this study reports the results of an analysis of 122 job functions in the credit area and provides a model explaining how compensation is driven by skills and responsibilities related to each job. Other studies, namely The Future of Credit (1998) and Future Trends in Credit and Accounts Receivable Management (2005) have identified and outlined the significant organizational and process changes that have affected credit management in the past ten years.

The findings of this study reflect the impact of the general trend in business process restructuring. First, there appear to be three basic organizational structures in which credit and A/R management operates. The "functional silo" structure is still firmly in place; however, in many cases such structures have been centralized into operations referred to as "shared service centers." The second structure observed in the study is more of a pure "cross-functional team" model. The third type might be defined as a hybrid structure, in which a company uses a combination of structures designed to fit the needs of its operation and customers. Due to the small sample size of only 12 companies, it is difficult to reach any meaningful conclusions on the impact of such structures on compensation, and as such, that will be the topic of a future study using a larger selection.

This study does provide interesting and meaningful results regarding the impact of different "compensable factors" (skills, qualifications, level of responsibility, reporting level, etc.) in determining both compensation and whether a job is classified as exempt or non-exempt from overtime and other requirements of the Fair Labor Standards Act (FLSA). Contrary to the conventional understanding that many jobs in the credit function can be performed by individuals without a college degree, a significant number of staff positions studied require either some form of post-high school degree or equivalent skill levels. The research methodology considered the impact of 20 different compensable factors on compensation. After adjusting for correlation among the factors, the salary model includes a total of nine compensable factors.
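
To make the idea of a factor-driven salary model concrete, here is a hedged sketch with fabricated factor scores and salaries (not the study's data or its nine-factor model), fitting compensation as a linear function of a few compensable factors:

```python
import numpy as np

# Fabricated example: education level, years of experience, supervisory flag.
factors = np.array([
    [1, 2,  0],
    [2, 5,  0],
    [2, 8,  1],
    [3, 10, 1],
    [3, 15, 1],
], dtype=float)
salary = np.array([38000, 45000, 55000, 68000, 80000], dtype=float)

# Ordinary least squares with an intercept column.
design = np.column_stack([np.ones(len(factors)), factors])
weights, *_ = np.linalg.lstsq(design, salary, rcond=None)
print("intercept and factor weights:", np.round(weights, 1))

# Predicted salary for a new job's factor scores.
new_job = np.array([1.0, 2, 6, 0])   # intercept term, then the three factor scores
print("predicted salary:", round(float(new_job @ weights), 2))
```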

Research Methodology

Data for the project were gathered by interviewing senior credit managers at 12 participating companies. A demographic breakout of the companies is included in Exhibit 1.

Three of these companies structure their credit organization into cross-functional teams. Six companies use a shared service structure, which is typically centralized into one physical location. The remaining three use a combination of structures within the company. It is interesting to note that the three companies using cross-functional teams can also be characterized as having relatively focused product lines and both more concentration (fewer, but larger customers) and somewhat less diversity in their customer bases. The three using combinations of structures were among the larger companies in the group. Among the companies using shared service centers, it is interesting to note that, while such centers process transactions for multiple divisions of each business, many of the more specific customer service functions, such as deduction resolution and quality issues, are often left to the product divisions themselves.

Modeling Civic Engagement: A Student Conversation with Jonathan Kozol

Jonathan Kozol's visit to Portland, Oregon, in April 2005 included a dialogue with 55 urban middle and high school students about inequities in American schools. Students left this conversation with a stronger sense of the systemic impediments to equal education. They also felt that their voice had been heard on a topic of national import. This essay suggests that Kozol provided students with a model of patient civic engagement and that teachers who use Kozol's work should build on this framework.

Student voice is deeply important. However, by the time students reach secondary school, many already are alienated not only from their role as students, but also from their role as active participants in our democracy. Teachers need to awaken students' sense of themselves as active participants in their own learning and, ultimately, in the decision-making processes at all levels of our society (Brown, Higgins, and Paulsen 2003).

Social Foundations of Education is one course that can help teachers understand the forces that render many students' voices ineffective. This university course examines these essential questions: What is education for? Who does education empower? For whom is it debilitating? How can we build more just and equitable schools? For many teachers, Jonathan Kozol has been a harbinger of possibility in this quest for social justice. His work has challenged them and their students to critique "what is" and to build "what could be."

In April 2005, Kozol visited a small urban university in Portland, Oregon, to deliver a public lecture sponsored by the school's Center for Ethics and Culture. As is often the case with such engagements, Kozol was in demand, both on campus and among community leaders. Because Kozol loves engaging with young people, he was asked to speak with local youth. Thus began efforts to involve 55 middle and high school students from schools located in high-poverty, racially mixed neighborhoods in a dialogue with Kozol. This paper tells the story of this remarkable conversation, with a focus on the effects it had on the students' perceptions of education in the United States, and of their abilities to participate in its transformation.

Participants and Purposes

Drawn from a public high school and three middle schools (two public and one private), the students who met with Kozol had never known an education without want. All lived in neighborhoods with few financial and physical resources. They were part of the first generation of Oregonians whose education had taken place after a 1990 property tax limitation initiative forced the vast majority of public school funding to come from state coffers. From their earliest days, all of these students have witnessed contentious and highly public debates within the state legislature-discussions that have had few positive outcomes for children. Children who have attended Portland public schools have experienced increased class sizes, loss of librarians and music and art teachers, and schoolrooms, hallways, and bathrooms that are cleaned irregularly. Since the mid-1990s, they have witnessed everything from a 30,000-person walk-a-thon for education to the possibility of a five-week early closing of schools, averted only when teachers gave up ten days of salary and citizens approved a three-year local income tax (Bailey 2005).

The dismal events of the past 16 years have crippled morale in the three public schools represented in the Kozol conversation, and the mood may plunge even lower (Ambrosio 2004). Each of the schools is on or near the NCLB "failing" list and, with local income tax and other important funding sources set to expire soon, these students learned of further cuts in their buildings just days before meeting with Kozol.

This bleak story also affected the private school students who participated in the conversation. All were African-American or Latino youth from neighborhoods with struggling schools. Many had attended public schools at one time, all had friends attending them and, most importantly, all had firsthand knowledge of the battle for resources in poor communities and their institutions. Their school, with its Catholic affiliation, had its own funding demons with which to wrestle, most notably a lack of funds to satisfy financial claims on the bankrupt Portland archdiocese.

Surveys of students (e.g., Kids First 2003; Boston Plan for Excellence 2004; Goddard, Hoy, and Hoy 2004; Unidos and Unidos 2004) in similar situations to those who participated in this conversation indicated that students are painfully aware of poor facilities, lack of resources, and discrimination in their schools, but that they rarely associate these problems with systemic patterns of racism and segregation. These urban students had ideas about ways to improve individual schools, but their recommendations typically focused on changing the behaviors of teachers, administrators, and students rather than on more widespread causes of injustice. For example, many high school students in the Oakland, California, public schools identified nonfunctional bathrooms as an important school problem, and personal efforts to beautify and maintain their schools as the solution (Kids First 2003). Surveys also suggested that, despite their many ideas, students in urban schools often had little faith in their own ability to bring about change. They frequently cited a lack of respect and attentiveness from adults in schools and a sense that the challenging coursework necessary for them to be successful in college or later life is seldom, if ever, available.

Wednesday, October 10, 2007

Modeling a Successful Model

Successful people have followed certain strategies or models to achieve success. Whether they are athletes who always perform at the highest level, businessmen who run giant companies, or salesmen who exceed their sales targets every month, what makes them achieve all these successes?

Every outcome that is produced is the result of an action. To achieve success, one may need to take many different actions in order to find out which action produces the best outcome.

Everyone can achieve what they want if they find out the exact strategies and actions that successful role models in their desired industry took. By finding out what they did and applying the same actions, you will be able to produce similar results. It will save you the most precious asset of your life, which is time.

You will not need to spend the time on trial and error to find out the right action to take, if you model somebody who has already found it.

We have been subconsciously modeling people since we were young. As babies, we modeled how our parents walked, and we transformed ourselves from crawling to walking. If a child finds that his or her father is very righteous and responsible, the child will most probably adopt those same characteristics when he or she grows up.

Therefore, modeling for excellence is the way to do it. Look for a role model who has achieved everything in your field of industry or interest, and model him or her now.

Planning A Competency Modeling Project

If you and your organization have decided to develop job competency models or a competency-based human resource system, your plan should include answering three key questions that will affect the outcome of the project.

1. What resources do you have to build your models?

There are many ways to do competency models. Some are complex, time-consuming and expensive. Others are not. The trade-offs have to do with validation and thoroughness, although the less complex approaches can include a validation step.

If you are doing more than one model, you should consider using an integrated approach that utilizes a common set of building block competencies, customizable for each job. Each model requires five to ten days of an internal or external consultant's time, including facilitation of a focus group of high performers, interviews and model development.

To get you started, pick an external consultant who is willing to transfer their methodology to you and train your staff to carry on the work, or have your staff trained to build competency models from the start. http://www.workitect.com/building-competency-models.php

For a large retail organization, a consulting firm did the first two models while certifying an internal HR manager to do additional models. The HR manager also designed and implemented selection and performance review applications based on the models. Structured interview questions were developed for each key position to help hiring managers assess and select candidates with the required competencies. Performance goals and results forms were also developed.

2. Where should you start in the organization?

The best way to demonstrate the payoffs of a competency approach is to start with a high-impact job or one that requires attention (e.g., high turnover or impact on the company's sales). Define the measurable outcomes of doing the model and specify applications.

For example, if you want to do a model of a software developer position, include an application of a selection system and interview guide that will allow you to expand the candidate pool and select superior-performing software developers. Other applications can be added, but you should start with at least one visible and measurable outcome for the model. If outcomes and applications are not built in, competency modeling may be perceived as an HR exercise without payoffs.

There is a natural tendency to want to start with a low-risk, low-visibility position, sometimes in order to evaluate the process and the consultant. You are better off doing your homework and thoroughly checking references before selecting a consultant than wasting an opportunity to make an impact that can multiply throughout the organization.

The ideal place to start is with a director or executive level position. Getting that group to develop a model for their position assures total buy-in. They have probably already gone through some strategic planning exercises that included thinking about their organization's “core competencies”. Developing a model validates or alters the competencies so that the “ideal” competencies are in fact the competencies required for superior performance in the organization. It also helps them understand the job competency process and align it to the company's strategy. For example, if innovation is a desired core competency, then a “fostering innovation” competency may be included in most models in order to drive the kind of change needed. An executive model is also needed for a good succession planning system.

This is the way a large manufacturing division launched its effort to improve performance and alter its culture. A model was done for division general managers and then cascaded down to other key positions.

3. Should you do one-size-fits-all models or multiple models for multiple jobs?

Some organizations use a generic model for all management positions (the one-size-fits-all model). The model may have been developed externally to cover all management jobs in all industries, or it may have been developed internally by surveying senior executives about what they believe are the key characteristics required for success in their organization. Both approaches are inexpensive to adopt.

The prime disadvantage is a lack of validity in a specific organization. The externally developed model may miss several key competencies that really make the difference between superior and average performance in your unique culture. The internally developed list is often based on opinion and false assumptions rather than hard data. There can also be a communications gap. One CEO insisted that his organization hire and develop people “with a fire in their belly”. He didn't mean finding people with ulcers, but it took a competency model to validate his opinion and to clearly and concisely describe the qualities of people who were actually successful in that organization.

The opposite end of the spectrum is to build models for every job in the organization (multiple models for multiple jobs), which is costly and unnecessary. Models are not needed for every single job; jobs can be grouped into like categories or levels. For example, ten different positions in an information systems department may be grouped into three levels.

Friday, October 05, 2007

Time To Get Out The Mobile Home Remodeling Plans

There are many mobile homes sitting in parks and on private land across the country. There may not be as much money in a mobile home as in a traditional home, but some remodeling projects are still a great idea for many owners. This article looks at a few mobile home remodeling ideas for you to consider.

Give Your Bathroom a New Look

One problem that many mobile home owners face is a bathroom that is smaller than they would like. You may be able to expand this space by adding a few shelves near the ceiling. You can then put some mirrors along the walls to give it a new look.

Adding a Storage Area

Do you need more space? You could add a storage unit to your mobile home remodeling project list. Storage is one of the problems many mobile home owners deal with regularly. Corner units are one possible solution: they look nice yet remain extremely compact.

Try Adding Some New Flooring Material

Flooring is one of those projects that can really transform a room. Mobile homes are notorious for not using the best flooring materials. Perhaps new laminate flooring in the bathroom or kitchen is in order for your mobile home remodeling plans. You should be able to find many ideas at your local flooring supply outlets; just make sure they are aware that you are planning the project for a mobile home.

Mobile home remodeling jobs are easier if you make some solid plans before you start. Bathrooms and kitchens are your two best rooms for making improvements. You may want to stay away from replacing cabinets and stick to refinishing, flooring, or adding attachments.

Home Remodeling Cost And Budget Setting Without Monster-Sized Nightmares

Estimating your home remodeling cost and setting a budget are important first steps because they keep you and your funds in shape. As a moneymaking activity, home remodeling is a popular way to boost property value if it is done correctly and backed by solid research.

Projecting your home remodeling cost from start to finish takes some time and care, and it will likely be both exciting and stressful. Still, it is probably less so than having to build a new house and move into an unfamiliar neighborhood.

When you begin to think about remodeling or home improvement, the first thought that comes to mind might be “maaan, it’ll cost a fortune!” Venture out and ask about home remodeling grants that may be available in your area; a real estate office or your town hall may be able to help with that. And since not all home improvements are created equal, start asking questions about your housing market.

If you’d like to remodel your home, or even part of it, make sure that it is still worth the money and effort. In other words, will your remodeling efforts pay back the money you’ve put in?

Asking for a professional cost estimate on your home remodeling plan is the key move to make. The estimate may fall between 15% and 20% of the home’s value, and of that total, roughly 40% is labor. Labor is the part of the budget you can trim by picking the jobs that are right for you to do yourself.

However, a general remodeling cost estimate consists of more than just the cost of labor and materials. There are many other items to account for before you can project your expenses clearly: the contractor’s pay, interest charges, legal fees, permits, extra shipping charges, unexpected specialized-trade services, final clean-up fees, and delay charges are all examples. Be sure to include them in your cost estimate; a rough worked example follows below.
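To make the rules of thumb above concrete, here is a rough sketch of the arithmetic in Python. Only the 15-20% estimate range and the roughly 40% labor share come from the guidance above; the home value and every extra line item are made-up placeholders, so substitute your own numbers.

```python
# Rough budgeting sketch. The home value and the "extras" figures below are
# hypothetical placeholders; the 15-20% range and ~40% labor share are the
# rules of thumb discussed in the text.

home_value = 250_000  # assumed home value

low_estimate = 0.15 * home_value   # 15% of home value
high_estimate = 0.20 * home_value  # 20% of home value

labor_share = 0.40  # roughly 40% of a professional remodeling cost is labor

# Easy-to-forget costs (all amounts are placeholders)
extras = {
    "permits_and_legal_fees": 1_500,
    "interest_charges": 2_000,
    "shipping_and_delivery": 800,
    "specialized_trades": 3_000,
    "final_cleanup": 600,
}

for estimate in (low_estimate, high_estimate):
    labor = labor_share * estimate
    total = estimate + sum(extras.values())
    print(f"Base estimate: ${estimate:,.0f}  "
          f"(of which labor is about ${labor:,.0f})  "
          f"Total with extras: ${total:,.0f}")
```

On these assumed numbers, the base estimate runs from $37,500 to $50,000, with labor around $15,000 to $20,000 of that, and the forgotten extras add nearly $8,000 more, which is exactly why they belong in the estimate from the start.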

Most home renovations, especially improvements like bathrooms and kitchens, have great long-term returns in quality of life, improved resale value, and lower energy expenses. Depending on your housing market’s economic activity, it may simply make sense to borrow money for remodeling projects rather than dip deep into your savings, as long as you can expect a good return on the investment.

Bear in mind that a sluggish housing economy may allow you to recoup only a portion of your remodeling investment, while a booming one may make you smile all the way to the bank.

In some housing markets, it’s common to remodel for a single reason alone: making your property look and feel more attractive to potential buyers. Kitchen and bathroom remodels are the most popular and effective projects to consider first.

We can all think of ways to save money on these projects by doing the work ourselves, but do-it-yourself is often the most misunderstood part. A hasty decision can run the overall cost through the roof. The obstacles that stand in the way include overestimating one’s abilities, lacking specialized tools, installations that must be done by licensed trades, and the free time we’d have to devote to finishing what we’ve started. Facing problems like these in the middle of the project may run the costs higher than getting professional help in the first place.

Why face that? You can either postpone your remodeling until it makes more sense, or offer to work alongside a professional during your days off to cut costs.

How about getting extra insurance during your remodeling? Discuss it with your insurance agent and see whether he or she recommends a policy in your case. If it helps you sleep better, so be it.

A well-designed home remodeling plan can save you loads of time and money, and most of all disappointment and heartache. Feeling strapped emotionally and financially is the last thing anybody wants while the home remodeling cost keeps climbing. Use the suggestions above, consider them all, and brainstorm for more. Avoid the common pitfalls and turn your home remodeling into a huge success.