Geometry.Net - the online learning center
Home  - Basic_P - Parallel Computing Programming

Page 2     21-40 of 109    Back | 1  | 2  | 3  | 4  | 5  | 6  | Next 20

         Parallel Computing Programming:     more books (100)
  1. Introduction to Parallel Computing (2nd Edition) by Ananth Grama, George Karypis, et al., 2003-01-26
  2. Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and their Implementation by George Em Karniadakis, Robert M. Kirby II, 2003-06-16
  3. Parallel Computing: Theory and Practice by Michael J. Quinn, 1993-09-01
  4. Parallel Programming: for Multicore and Cluster Systems by Thomas Rauber, Gudula Rünger, 2010-03-10
  5. Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4 by Gaston Hillar, 2011-01-11
  6. Scientific Parallel Computing by L. Ridgway Scott, Terry Clark, et al., 2005-03-28
  7. Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms by Vipin Kumar, Ananth Grama, et al., 1994-01
  8. Principles of Parallel Programming by Calvin Lin, Larry Snyder, 2008-03-07
  9. Parallel Computing: Numerics, Applications, and Trends
  10. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers (2nd Edition) by Barry Wilkinson, Michael Allen, 2004-03-14
  11. The Sourcebook of Parallel Computing (The Morgan Kaufmann Series in Computer Architecture and Design)
  12. CUDA by Example: An Introduction to General-Purpose GPU Programming by Jason Sanders, Edward Kandrot, 2010-07-29
  13. Patterns for Parallel Programming by Timothy G. Mattson, Beverly A. Sanders, et al., 2004-09-25
  14. An Introduction to Parallel Programming by Peter Pacheco, 2011-01-18

21. SAL - Parallel Computing - Programming Languages & Systems - aCe
http://sal.kachinatech.com/C/1/ACE.html
aCe is a data-parallel computing environment designed to improve the adaptability of algorithms to diverse architectures. Its primary purpose is to encourage programmers to implement applications on parallel architectures by assuring them that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications.
Home Site: http://newton.gsfc.nasa.gov/aCe
Source Code Availability: Yes (with registration)
Available Binary Packages:
  • Debian Package: No
  • RedHat RPM Package: No
  • Other Packages: No
Targeted Platforms: Linux, SGI Indigo, MasPar MP-1/2, Cray T3E
Software/Hardware Requirements: PVM for multi-CPU release

22. Department Of Computing Science
Department of Computing Science. Research areas include algorithmics, artificial intelligence, communication networks, computer graphics, computer vision and robotics, database systems, multimedia, parallel programming systems, and software engineering.
http://www.cs.ualberta.ca/
University of Alberta, Faculty of Science

23. Designing And Building Parallel Programs
Designing and Building Parallel Programs (Online) integrates four resources concerned with parallel programming and parallel computing.
http://www.mcs.anl.gov/dbpp/
Designing and Building Parallel Programs, by Ian Foster. Designing and Building Parallel Programs (Online) is an innovative combined print and online resource publishing project. It incorporates the content of a textbook published by Addison-Wesley into an evolving online resource. Here is a description of the book, and here is the table of contents. See also the list of mirror sites around the world. Designing and Building Parallel Programs (Online) integrates four resources concerned with parallel programming and parallel computing. We have prepared and presented a very successful full-day tutorial based on Designing and Building Parallel Programs; let us know if you are interested in seeing this presented elsewhere. Read about what's new on DBPP Online. There are also a few errata. The content of Designing and Building Parallel Programs may not be archived or reproduced without written permission. Designing and Building Parallel Programs is available wherever fine technical books are sold.

24. Wolfgang Schreiner
Johannes Kepler University parallel and distributed computing, generic programming, semantics of programming languages, parallel functional languages, symbolic and algebraic computation.
http://www.risc.uni-linz.ac.at/people/schreine/
Wolfgang Schreiner
Research Institute for Symbolic Computation (RISC-Linz)
Johannes Kepler University

A-4040 Linz, Austria, Europe
Email: Wolfgang.Schreiner@risc.uni-linz.ac.at
URL: http://www.risc.uni-linz.ac.at/people/schreine
    Brokering Distributed Mathematical Services
    The goal of this project is the development of a framework for brokering mathematical services that are distributed among networked servers. The foundation of this framework is a language for describing the mathematical problems solved by the services.
    Distributed Maple
    Distributed Maple is a system for writing parallel programs in the computer algebra system Maple based on a communication and scheduling mechanism implemented in Java.
    Integrating Temporal Specifications as Runtime Assertions into Parallel Debugging Tools
    This project pursues the integration of formal methods with tools for the debugging of parallel message passing programs. The idea is to generate from temporal logic specifications executable assertions that can be checked in the various states of parallel program execution.
    Distributed Constraint Solving for Functional Logic Programming
    I am the technical leader of a research project on the development of a distributed constraint solving system based on a functional logic language.
    25. Addison-Wesley
    This book teaches the fundamental concepts of multithreaded, parallel and distributed computing. Emphasizes how to solve problems, with correctness the primary concern and performance an important, but secondary, concern. (Gregory R. Andrews)
    http://cseng.aw.com/book/0,3828,0201357526,00.html
    Book Recognition Jolt Award Winners
    Addison-Wesley Professional is proud to announce the winners of the Software Development Magazine Jolt Awards. Congratulations to the following books for their recognition in these prestigious awards. In the Technical Book Category:
    Productivity Award Winner:
    Understanding Web Services
    by Eric Newcomer
    In the General Book Category:
    Productivity Award Winners:
    Documenting Software Architectures: Views and Beyond
    by Paul Clements, et al
    Patterns of Enterprise Application Architecture
    by Martin Fowler
    Test-Driven Development: By Example
    by Kent Beck
    For the past 13 years, the Software Development Magazine Jolt Product Excellence and Productivity Awards have recognized technical books that have "jolted" the industry, making the task of creating software faster, easier, and more efficient. Jolt cola, the fabled soft drink quaffed by software programmers for sustenance during development project marathons, sponsors the annual awards presentation. Past Jolt and Productivity Award winners include Bertrand Meyer's Object-Oriented Software Construction, Gamma et al.'s Design Patterns, and Fowler/Scott's UML Distilled.

    26. Nan's Parallel Computing Page
    Links related to parallel computing, including the SP Parallel Programming Workshop (Maui High Performance Computing Center).
    http://www.cs.rit.edu/~ncs/parallel.html
    Nan's Parallel Computing Page
    This list contains links related to parallel computing. If you have any suggestions, please send me e-mail. Please note that an 'XXX' at the end of a line means that I have recently (see the date at the bottom of the page) had trouble getting there. I am working on fixing or deleting these links.

    Parallel Computers
    Odyssey FAQ
    The sC++ language - Synchronous Java
    Applied Parallel Research, Inc.
    ARCH Library ...
    Tools for CSP
    Cluster Computing
    MOSIX - Scalable Cluster Computing for Linux (Hebrew Univ.)
    Appleseed-parallel Macintosh Cluster
    IEEE CS Task Force on Cluster Computing
    Kaláka, Distributed system for high performance parallel computing (Univ. of Auckland, NZ)
    SCL Cluster Cookbook
    Java for Parallel/High Performance Computing
    Concordia Home Page
    Concurrent Programming Using the Java Language
    Concurrency: State Models and Java Programs
    Infosphere Infrastructure - Current Release ...
    CTJ - Communicating Threads in Java (NL)
    Java(tm) Distributed Computing
    Java Parallel
    JavaOne Session Presentations
    TOPIC Links and Pages ...
    http://wwwipd.ira.uka.de/JavaParty/

    27. Applications
    Information about the computing cluster at the Birla Institute of Technology and Science and a parallel programming tutorial.
    http://bitscap.bits-pilani.ac.in/param/index.html
    Welcome to the Param Homepage
    PARAM 10000 -BITS Pilani
    ...site under construction!

    28. 15-849C: Parallel Computing
    Various topics in parallel computing, including parallel languages and parallel algorithms. The class will look at both theoretical and practical issues, and will include programming assignments.
    http://www.cs.cmu.edu/~guyb/parcomp98.html
    15-849C: Parallel Computing
    Instructor: Guy E. Blelloch Credit: 1 Graduate Core Unit or 12 University units Time and place: M-W 10:30-11:50, Wean 4615A This course will cover various topics in parallel computing including parallel languages and parallel algorithms. The class will look at both theoretical and practical issues, and will include programming assignments on various parallel machines. The course should be appropriate for graduate students in all areas and for advanced undergraduates.
    Topics to be covered:

    29. Cornell Multitask Toolbox
    Commercial. CMTM provides a new, user-friendly set of development and programming tools that extends the power of MATLAB to parallel computing.
    http://www.tc.cornell.edu/Services/Software/CMTM/

    30. LAM / MPI:
    Defines the Local Area Multicomputer, which uses MPI in parallel computing. Read a user survey and find available downloads.
    http://www.mpi.nd.edu/lam
    LAM / MPI Parallel Computing
    LAM web site navigation
    Mirror sites
    LAM home
    • LAM information
    • "Powered by" buttons ...
      3rd party packages

      The latest stable version of LAM/MPI is 6.5.9. If you are using a version prior to that, you are strongly encouraged to upgrade immediately. LAM (Local Area Multicomputer) is an MPI programming environment and development system for heterogeneous computers on a network. With LAM, a dedicated cluster or an existing network computing infrastructure can act as one parallel computer solving one problem. LAM features extensive debugging support in the application development cycle and peak performance for production applications. LAM features a full implementation of the MPI communication standard.
      Obtaining LAM/MPI
      There are currently three different versions of LAM/MPI available for download. The 6.5.9 release has been heavily tested and is considered stable. The other two versions are for users who feel a need to be on the bleeding edge or who need a bug fix that has not yet been integrated into the stable release.
      • LAM 6.5.9
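      As a sketch of what the message-passing model that LAM implements looks like in practice, here is a minimal MPI program in C. This is a hypothetical illustration, not taken from the LAM documentation; it assumes an MPI installation providing the usual mpicc/mpirun wrappers, and uses only MPI-1 calls.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */

    if (rank == 0) {
        /* rank 0 sends one integer to every other rank */
        int msg = 42;
        for (int dest = 1; dest < size; dest++)
            MPI_Send(&msg, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        /* every other rank receives it and reports */
        int msg;
        MPI_Status status;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank %d of %d received %d\n", rank, size, msg);
    }

    MPI_Finalize();
    return 0;
}
```

      Compiled with mpicc and launched with, say, mpirun -np 4, each non-root rank would print the value it received from rank 0; under LAM the nodes of a cluster act as one parallel machine for this purpose.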

    31. HLRS - Services - Parallel Computing - Programming Models - MPI
    Courses on computational science and parallel programming from the Edinburgh Parallel Computing Centre (EPCC) Training and Education Centre (including MPI-1
    http://www.hlrs.de/organization/par/services/models/mpi/
    MPI stands for Message Passing Interface and is the standard for message passing as defined by the MPI-Forum.
    Information about MPI at RUS and at HLRS MPI services
    User guidelines and hotline service for any questions about the MPI installations on the platforms noted below (use the links at the platforms).
    Automatic Counter Profiling on T3E with a Weekly Mail

    MPI Benchmark Service

    Platforms at HWW (accessible via HLRS) on which MPI is available
    MPI on the CRAY T3E/512
    MPI on the NEC SX-4 and SX-5

    MPI on the Hitachi SR 8000

    Platforms at HLRS on which MPI is available MPI on the Hitachi SR2201
    MPI on the intel Paragon
    MPT and MPICH on the SGI Onyx2 (Vision)
    Platforms at URS/RUS on which MPI is available:
    MPI on Common Ulm and Stuttgart Server (CUSS)
    MPI on the IBM RS/6000 SP
    MPI on Clusters of Workstations
    MPI on PCs under NT
    MPI standard documents:
    The MPI 1.1 standard document is available as mpi-11.ps.gz (355757 bytes), mpi-11.ps.Z (506895 bytes), or mpi-11.pdf (957443 bytes, unofficial, bad font handling). The MPI 2.0 standard document contains the standards MPI

    32. Chilean Computing Week, Punta Arenas, Nov. 5-9, 2001
    Including the following events: XXI International Conference of the Chilean Computer Science Society; IX Chilean Congress on Computing; V Workshop on Parallel and Distributed Systems; III Congress on Higher Education in Computer Science; II Workshop on Artificial Intelligence; I Workshop on Software Engineering; ACM South American Region Programming Contest; tutorials and invited talks. University of Magellan, Punta Arenas, Chile; 5-9 November 2001.
    http://www.dcc.uchile.cl/~mmarin/sccc/english.html

    33. ParaScope
    Inspired Distributed computing (NIDISC 2003). Workshop on Formal Methods for parallel programming Theory and
    http://www.computer.org/parascope
    IEEE Computer Society's
    ParaScope
    A Listing of Parallel Computing Sites
    maintained by David A. Bader
    Topics:
    New and Updated Links:
    October 25, 2002

    34. HLRS_Services - Parallel Computing - Programming Models - MPI - Effective I/O Ba
    SPEC Workshop on Benchmarking Parallel and High-Performance Computing Systems (copy of the slides), Wuppertal, Germany, Sept. 13, 1999.
    http://www.hlrs.de/organization/par/services/models/mpi/b_eff_io/
    The benchmark covers two goals:
  • achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications;
  • get detailed information about several access patterns and buffer lengths.
    The benchmark examines "first write", "rewrite" and "read" access; strided (individual and shared pointers) and segmented collective patterns on one file per application; and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed I/O. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing that characterizes the message-passing capabilities of a system in a few minutes.

  • The latest releases

    35. Distributed Systems Laboratory
    Research focus includes programming support for parallel and distributed computing, quality of service, and security.
    http://www-fp.mcs.anl.gov/dsl/index.html
    Distributed Systems Laboratory
    Argonne National Laboratory
    University of Chicago
    The Distributed Systems Laboratory (DSL) is a research and software development group within the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory and the Department of Computer Science at the University of Chicago. A Grid is a persistent infrastructure that supports computation-intensive and data-intensive collaborative activities, especially when these activities span organizations. Grid computing facilitates the formation of "Virtual Organizations" for shared use of distributed computational resources. Under the leadership of Dr. Ian Foster, the DSL hosts research and development activities designed to realize the potential of Grids for computational science and engineering. Together with Carl Kesselman's Center for Grid Technologies at the University of Southern California Information Sciences Institute, we are co-founders of the Globus Project(TM), a highly collaborative international and multidisciplinary effort to make Grid computing a reality.

    36. PVM: Parallel Virtual Machine
    PVM also serves as an educational tool to teach parallel programming. With tens of thousands of users, PVM has become the de facto standard for distributed computing worldwide.
    http://www.csm.ornl.gov/pvm/pvm_home.html
    PVM (Parallel Virtual Machine) is a software package that permits a heterogeneous collection of Unix and/or Windows computers hooked together by a network to be used as a single large parallel computer. Thus large computational problems can be solved more cost-effectively by using the aggregate power and memory of many computers. The software is very portable. The source, which is available free through netlib, has been compiled on everything from laptops to CRAYs. PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems, in addition to PVM's use as an educational tool to teach parallel programming. With tens of thousands of users, PVM has become the de facto standard for distributed computing worldwide. For those who need to know, PVM is Y2K compliant: PVM does not use the date anywhere in its internals.
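    By way of illustration only (this sketch is not from the PVM documentation, and assumes libpvm3 with a running pvmd daemon), the smallest possible PVM program simply enrolls in the virtual machine, reports its task id, and leaves again:

```c
/* Hypothetical minimal PVM example: enroll, report, exit. */
#include <stdio.h>
#include "pvm3.h"

int main(void) {
    int mytid = pvm_mytid();  /* enroll this process in the virtual machine */
    if (mytid < 0) {
        fprintf(stderr, "could not connect to pvmd\n");
        return 1;
    }
    printf("enrolled with task id t%x\n", mytid);
    pvm_exit();               /* leave the virtual machine */
    return 0;
}
```

    A real application would go on to spawn worker tasks with pvm_spawn and exchange packed messages (pvm_initsend, pvm_send, pvm_recv), which is the heterogeneous message-passing model the paragraph above describes.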

    37. Rajkumar Buyya: Page Moved
    Monash University. Computer Architecture, Operating Systems, Compilers, Programming Paradigms, Parallel and Distributed Computing, Cluster Computing, Parallel I/O.
    http://www.rdt.monash.edu.au/~rajkumar/
    This page has moved from Monash to the University of Melbourne
    The new URL of this page is http://www.cs.mu.oz.au/~raj/ Your browser should automatically be redirected there in 5 seconds. Please update your link.

    38. PARALLEL COMPUTING LAB AT NMSU
    At New Mexico State University. Configuration information and diagrams, research, publications, and links to parallel programming information.
    http://www.cs.nmsu.edu/pcl/

    39. Computer Science Department - Bordeaux 1 University
    Computer Science Department. Research areas include combinatorics, algorithmics, logic, automata, parallel computing, symbolic programming, and graphics.
    http://dept-info.labri.u-bordeaux.fr/ANGLAIS/
    Université Bordeaux I
    UFR Mathématiques et Informatique
    Computer Science Department
    The Department
    Programs

    40. Department Of Computer Science, University College Cork, Ireland
    Department of Computer Science. Research areas: Algorithms, Unified Computing, Computer Communications, Security, Computer Simulation, Constraint-Based Reasoning, Digital Video Compression, Expert Systems, Intelligent Information Agents, Neural Networks, Object-Oriented Database Systems, Distributed and Parallel Processing, Semantics of Programming Languages, Theory and Formal Methods.
    http://www.cs.ucc.ie/
    Boole Centre for Research Informatics
    (BCRI)
    Centre for Efficiency Oriented Languages
    (CEOL)
    Cork Constraint Computation Centre

    Centre for Unified Computing
    (CUC)
    ISPDC 2003

    UCC occupies a unique place in the history of Information Technology. Boolean algebra, which provides the mathematical basis for computer design, was named after George Boole, the first Professor of Mathematics at UCC. Today the Department of Computer Science is one of the largest and fastest growing of the academic departments within University College Cork.
    News
    • 24-Mar-2003: High performance computing cluster installed
    • 24-Mar-2003: Centre for Efficiency Oriented Languages

    Department of Computer Science, The Kane Building, University College Cork, College Road, Cork, Ireland. Phone: +353-(0)21-490-2795 Fax: +353-(0)21-427-4390 secretary@cs.ucc.ie webmaster@cs.ucc.ie


