Final Project

Winter 2020



Deadlines

Draft of Intro, Machine Description, and CPU Operations: Tuesday, February 4 at 8am (start of class)
Draft of Memory Operations: Tuesday, February 25 at 8am (start of class)
Final report with all measurements plus code: Thursday, March 12 at 8am (start of class)

Overview

In building an operating system, it is important to be able to determine the performance characteristics of underlying hardware components (CPU, RAM, disk, network, etc.), and to understand how their performance influences or constrains operating system services. Likewise, in building an application, one should understand the performance of the underlying hardware and operating system, and how they relate to the user's subjective sense of that application's 'responsiveness'. While some of the relevant quantities can be found in specs and documentation, many must be determined experimentally. While some values may be used to predict others, the relations between lower-level and higher-level performance are often subtle and non-obvious.

In this project, you will create, justify, and apply a set of experiments to a system to characterize and understand its performance. In addition, you may explore the relations between some of these quantities. In doing so, you will study how to use benchmarks to usefully characterize a complex system. You should also gain an intuitive feel for the relative speeds of different basic operations, which is invaluable in identifying performance bottlenecks.

You have complete choice over the operating system and hardware platform for your measurements. You can use the laptop that you are comfortable with, an operating system running in a virtual machine monitor, a smartphone, a game system, or even a supercomputer.

You may work either alone or in 2–3 person groups. Groups do the same project as individuals. All members receive the same grade. Note that working in groups may or may not make the project easier, depending on how the group interactions work out. If collaboration issues arise, contact your instructor as soon as possible: flexibility in dealing with such issues decreases as the deadline approaches.

This project has two parts. First, you will implement and perform a series of experiments. Second, you will write a report documenting the methodology and results of your experiments. When you finish, you will submit your report as well as the code used to perform your experiments.

Report

Your report will have a number of sections, including an introduction, a machine description, and descriptions and discussions of your experiments.

1) Introduction

Describe the goals of the project and, if you are in a group, who performed which experiments. State the language you used to implement your measurements, and the compiler version and optimization settings you used to compile your code. If you are measuring in an unusual environment (e.g., virtual machine, Web browser, compute cloud, etc.), discuss the implications of the environment on the measurement task (e.g., additional variance that is difficult for you to control for). Estimate the amount of time you spent on this project.

2) Machine Description

Your report should contain a reasonably detailed description of the test machine(s). The relevant information should be available either from the system (e.g., sysctl on BSD, /proc on Linux, System Profiler on Mac OS X, the cpuid x86 instruction), or online. Gathering this information should not require much work, but in explaining and analyzing your results you will find these numbers useful. You should report at least the following quantities (a sketch of one way to query a few of them programmatically appears after the list):
  1. Processor: model, cycle time, cache sizes (L1, L2, instruction, data, etc.)
  2. Memory bus
  3. I/O bus
  4. RAM size
  5. Disk: capacity, RPM, controller cache size
  6. Network card speed
  7. Operating system (including version/release)
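
As a small sketch of one way to query a few of these quantities programmatically, assuming Linux with glibc (the _SC_LEVEL* cache constants are glibc extensions and may report zero or -1 on other platforms), cross-check the output against sysctl, /proc/cpuinfo, or vendor documentation:

    /* machinfo.c -- print a few machine parameters via sysconf(3).
     * A starting-point sketch assuming Linux/glibc; verify every
     * number against an independent source. */
    #include <stdio.h>
    #include <unistd.h>

    static void show(const char *name, long v)
    {
        if (v > 0)
            printf("%-22s %ld\n", name, v);
        else
            printf("%-22s (unavailable)\n", name);
    }

    int main(void)
    {
        show("page size (B)",    sysconf(_SC_PAGESIZE));
        show("online CPUs",      sysconf(_SC_NPROCESSORS_ONLN));
        show("physical pages",   sysconf(_SC_PHYS_PAGES));
        show("L1 dcache (B)",    sysconf(_SC_LEVEL1_DCACHE_SIZE));
        show("L1 line size (B)", sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
        show("L2 cache (B)",     sysconf(_SC_LEVEL2_CACHE_SIZE));
        show("L3 cache (B)",     sysconf(_SC_LEVEL3_CACHE_SIZE));
        return 0;
    }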

3) Experiments

Perform your experiments by following these steps:
  1. Estimate the base hardware performance of the operation and cite the source you used to determine this quantity (system info, a particular document). For example, when measuring disk read performance for a particular size, you can refer to the disk specification (easily found online) to determine seek, rotation, and transfer performance. Based on these values, you can estimate the average time to read a given amount of data from the disk assuming no software overheads. For operations where the hardware performance does not apply or is difficult to measure (e.g., procedure call), state it as such.
  2. Make a guess as to how much overhead software will add to the base hardware performance. For a disk read, this overhead will include the system call, arranging the read I/O operation, handling the completed read, and copying the data read into the user buffer. We will not grade you on your guess; this is for you to test your intuition. (Obviously you can do this after performing the experiment to derive an accurate 'guess', but where is the fun in that?) For a procedure call, this overhead will consist of the instructions used to manage arguments and make the jump. Finally, if you are measuring a system in an unusual environment (e.g., virtual machine, compute cloud, Web browser, etc.), estimate the degree of variability and error that might be introduced when performing your measurements.
  3. Combine the base hardware performance and your estimate of software overhead into an overall prediction of performance.
  4. Implement and perform the measurement. In all cases, you should run your experiment multiple times, for long enough to obtain repeatable measurements, and average the results. Also compute the standard deviation across the measurements. Note that, when measuring an operation using many iterations (e.g., system call overhead), consider each run of iterations as a single trial and compute the standard deviation across multiple trials (not each individual iteration).
  5. Use a low-overhead mechanism for reading timestamps. All modern processors have a cycle counter that applications can read using a special instruction (e.g., rdtsc). Searching for 'rdtsc' in Google, for instance, will provide you with a plethora of additional examples. Note, though, that in the modern age of power-efficient multicore processors, you will need to take additional steps to reliably use the cycle counter to measure the passage of time. You will want to disable dynamically adjusted CPU frequency (the mechanism will depend on your platform) so that the frequency at which the processor computes is deterministic and does not vary. Use 'nice' to boost your process priority. Restrict your measurement programs to using a single core. (A sketch of such a timing harness appears after this list.)
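
As a concrete starting point, here is a minimal sketch of a cycle-counter timing harness, assuming an x86-64 processor and gcc or clang; the ITERS and TRIALS constants are arbitrary placeholders, the empty loop body stands in for the operation under test, and you still need to pin the process to one core and fix the CPU frequency as described above. The cpuid and rdtscp instructions keep out-of-order execution from smearing work across the timed region.

    /* harness.c -- skeleton for timing a short operation with the
     * x86 cycle counter; compile with gcc -O2 harness.c -lm. */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    #define ITERS  100000   /* iterations per trial (placeholder) */
    #define TRIALS 10       /* trials, for the standard deviation */

    static inline uint64_t rdtsc_start(void)
    {
        uint32_t lo, hi;
        /* cpuid serializes: earlier instructions cannot leak in */
        __asm__ __volatile__("cpuid\n\trdtsc"
                             : "=a"(lo), "=d"(hi) :: "%rbx", "%rcx");
        return ((uint64_t)hi << 32) | lo;
    }

    static inline uint64_t rdtsc_stop(void)
    {
        uint32_t lo, hi;
        /* rdtscp waits for preceding instructions to finish */
        __asm__ __volatile__("rdtscp" : "=a"(lo), "=d"(hi) :: "%rcx");
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        double per_iter[TRIALS], mean = 0, var = 0;

        for (int t = 0; t < TRIALS; t++) {
            uint64_t start = rdtsc_start();
            for (volatile int i = 0; i < ITERS; i++)
                ;                    /* operation under test goes here */
            uint64_t stop = rdtsc_stop();
            /* one whole run of ITERS iterations is a single trial */
            per_iter[t] = (double)(stop - start) / ITERS;
        }
        for (int t = 0; t < TRIALS; t++)
            mean += per_iter[t] / TRIALS;
        for (int t = 0; t < TRIALS; t++)
            var += (per_iter[t] - mean) * (per_iter[t] - mean) / TRIALS;
        printf("%.2f cycles/iteration, stddev %.2f over %d trials\n",
               mean, sqrt(var), TRIALS);
        return 0;
    }

Subtracting the loop and timer overhead you measure in the first CPU experiment below then isolates the operation itself.
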
In your report:
  1. Clearly explain the methodology of your experiment.
  2. Present your results:
    1. For measurements of single quantities (e.g., system call overhead), use a table to summarize your results. In the table report the base hardware performance, your estimate of software overhead, your prediction of operation time, and your measured operation time.
    2. For measurements of operations as a function of some other quantity, report your results as a graph with operation time on the y-axis and the varied quantity on the x-axis. Include your estimates of base hardware performance and overall prediction of operation time as curves on the graph as well.
  3. Discuss your results:
    1. Cite the source for the base hardware performance.
    2. Compare the measured performance with the predicted performance. If they are wildly different, speculate on reasons why. What may be contributing to the overhead?
    3. Evaluate the success of your methodology. How accurate do you think your results are?
    4. For graphs, explain any interesting features of the curves.
    5. Answer any questions specifically mentioned with the operation.
  4. At the end of your report, summarize your results in a table for a complete overview. The columns in your table should include 'Operation', 'Base Hardware Performance', 'Estimated Software Overhead', 'Predicted Time', and 'Measured Time'. (Not required for the draft.)
  5. State the units of all reported values.

Do not underestimate the time it takes to describe your methodology and results.

4) Operations

  1. CPU, Scheduling, and OS Services
    1. Measurement overhead: Report the overhead of reading time, and report the overhead of using a loop to measure many iterations of an operation.
    2. Procedure call overhead: Report as a function of the number of integer arguments, from 0 to 7. What is the incremental overhead of an argument?
    3. System call overhead: Report the cost of a minimal system call. How does it compare to the cost of a procedure call? Note that some operating systems will cache the results of some system calls (e.g., idempotent system calls like getpid), so only the first call by a process will actually trap into the OS.
    4. Task creation time: Report the time to create and run both a process and a kernel thread (kernel threads run at user-level, but they are created and managed by the OS; e.g., pthread_create on modern Linux will create a kernel-managed thread). How do they compare?
    5. Context switch time: Report the time to context switch from one process to another, and from one kernel thread to another. How do they compare? In the past, students have found using blocking pipes to be useful for forcing context switches; a sketch of this trick appears at the end of this section. (For insight into why a context switch can be much more expensive than a procedure call, consider the evolution of the Linux kernel trap on x86.)
  2. Memory
    1. RAM access time: Report latency for individual integer accesses to main memory and the L1 and L2 caches. Present results as a graph with the x-axis as the log of the size of the memory region accessed, and the y-axis as the average latency. Note that the lmbench paper is a good reference for this experiment. In terms of the lmbench paper, measure the 'back-to-back-load' latency and report your results in a graph similar to Fig. 1 in the paper. You should not need to use information about the machine or the size of the L1, L2, etc., caches when implementing the experiment; the experiment will reveal these sizes. In your graph, label the places that indicate the different hardware regimes (L1 to L2 transition, etc.). A sketch of a pointer-chasing implementation appears at the end of this section.
    2. RAM bandwidth: Report bandwidth for both reading and writing. Use loop unrolling to get more accurate results, and keep in mind the effects of cache line prefetching (e.g., see the lmbench paper).
    3. Page fault service time: Report the time for faulting an entire page from disk (mmap is one useful mechanism). Dividing by the size of a page, how does it compare to the latency of accessing a byte from main memory?
  3. Network
    1. Round trip time. Compare with the time to perform a ping (ICMP requests are handled at kernel level).
    2. Peak bandwidth.
    3. Connection overhead: Report setup and tear-down.

    Evaluate for the TCP protocol. For each quantity, compare both remote and loopback interfaces. Comparing the remote and loopback results, what can you deduce about baseline network performance and the overhead of OS software? For both round trip time and bandwidth, how close to ideal hardware performance do you achieve? What are reasons why the TCP performance does not match ideal hardware performance (e.g., what are the pertinent overheads)? In describing your methodology for the remote case, either provide a machine description for the second machine (as above), or use two identical machines. (A sketch of a simple TCP timing client appears at the end of this section.)

  4. File System
    1. Size of file cache: Note that the file cache size is determined by the OS and will be sensitive to other load on the machine; for an application accessing lots of file system data, an OS will use a notable fraction of main memory (GBs) for the file system cache. Report results as a graph whose x-axis is the size of the file being accessed and the y-axis is the average read I/O time. Do not use a system call or utility program to determine this metric except to sanity check.
    2. File read time: Report for both sequential and random access as a function of file size. Discuss the sense in which your 'sequential' access might not be sequential. Ensure that you are not measuring cached data (e.g., use the raw device interface). Report as a graph with a log/log plot with the x-axis the size of the file and y-axis the average per-block time. A sketch of a direct-I/O read loop appears at the end of this section.
    3. Remote file read time: Repeat the previous experiment for a remote file system. What is the 'network penalty' of accessing files over the network? You can either configure your second machine to provide remote file access, or you can perform the experiment on a department machine (e.g., APE lab). On these machines your home directory is mounted over NFS, so accessing a file under your home directory will be a remote file access (although, again, keep in mind file caching effects).
    4. Contention: Report the average time to read one file system block of data as a function of the number of processes simultaneously performing the same operation on different files on the same disk (and not in the file buffer cache).
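
For the context-switch measurement, here is a minimal sketch of the blocking-pipe trick mentioned above, assuming Linux/POSIX: two processes ping-pong one byte over a pair of pipes, so each round trip costs two context switches plus two pipe read/write pairs (measure the pipe cost separately and subtract it). PINGS is an arbitrary choice, clock_gettime is used for brevity, and you would pin both processes to the same core (e.g., with sched_setaffinity or taskset) so that a switch must actually occur.

    /* ctxswitch.c -- force context switches by ping-ponging a byte
     * between two processes over a pair of pipes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <time.h>

    #define PINGS 10000              /* arbitrary trial length */

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        int p2c[2], c2p[2];          /* parent->child, child->parent */
        char b = 'x';

        if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); exit(1); }

        if (fork() == 0) {           /* child: echo each byte back */
            for (int i = 0; i < PINGS; i++) {
                read(p2c[0], &b, 1);
                write(c2p[1], &b, 1);
            }
            exit(0);
        }

        double start = now_us();
        for (int i = 0; i < PINGS; i++) {
            write(p2c[1], &b, 1);    /* wakes the child ... */
            read(c2p[0], &b, 1);     /* ... and blocks us until it replies */
        }
        printf("%.2f us per round trip (approx. 2 context switches)\n",
               (now_us() - start) / PINGS);
        return 0;
    }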
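
For the RAM access time experiment, here is a sketch of the back-to-back-load idea from the lmbench paper, assuming Linux/POSIX: build a pointer chain that visits every slot of a memory region in random order, then chase it, so that each load depends on the previous one and the prefetcher cannot help. The region sizes and access count are arbitrary placeholders; sweep the sizes and plot latency against the log of the region size as described above.

    /* memlat.c -- dependent-load latency vs. region size, in the
     * spirit of lmbench's memory read benchmark. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NACCESS (1 << 22)        /* loads per measurement */

    static void *volatile sink;      /* defeats dead-code elimination */

    static double chase_ns(size_t bytes)
    {
        size_t n = bytes / sizeof(void *);
        void **arr = malloc(n * sizeof(void *));
        size_t *order = malloc(n * sizeof(size_t));
        if (!arr || !order) { perror("malloc"); exit(1); }

        /* shuffle the visit order so the next address is
           unpredictable, then link slot order[i] to order[i+1] */
        for (size_t i = 0; i < n; i++) order[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
        for (size_t i = 0; i < n; i++)
            arr[order[i]] = &arr[order[(i + 1) % n]];

        void **p = &arr[order[0]];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < NACCESS; i++)
            p = (void **)*p;         /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        sink = p;                    /* keep the chase live */

        free(arr); free(order);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / NACCESS;
    }

    int main(void)
    {
        for (size_t kb = 4; kb <= 64 * 1024; kb *= 2)  /* 4 KB..64 MB */
            printf("%8zu KB  %6.2f ns/load\n", kb, chase_ns(kb * 1024));
        return 0;
    }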
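
For the network measurements, here is a sketch of a timing client, assuming POSIX sockets and an echo peer already listening on the other end; the default host, port, and ping count are placeholders. It times connect() for the setup cost and a one-byte ping-pong for round-trip time; TCP_NODELAY disables Nagle batching so each byte goes out immediately. Run it against both the loopback address and your remote machine.

    /* tcprtt.c -- TCP connection setup and round-trip time against
     * an echo server. Usage: ./tcprtt [host] [port] */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <time.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #define PINGS 1000               /* arbitrary trial length */

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(int argc, char **argv)
    {
        const char *host = argc > 1 ? argv[1] : "127.0.0.1";
        int port = argc > 2 ? atoi(argv[2]) : 7777;  /* placeholder */

        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(port) };
        inet_pton(AF_INET, host, &addr.sin_addr);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        double t0 = now_us();
        if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        printf("connect: %.1f us\n", now_us() - t0);

        int one = 1;                 /* send each byte immediately */
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

        char b = 'x';
        t0 = now_us();
        for (int i = 0; i < PINGS; i++) {
            write(s, &b, 1);
            read(s, &b, 1);          /* wait for the echo */
        }
        printf("round trip: %.1f us\n", (now_us() - t0) / PINGS);
        close(s);
        return 0;
    }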
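
For the file read experiments, here is a sketch of a sequential read loop that tries to keep the buffer cache out of the picture, assuming Linux: O_DIRECT bypasses the file cache but requires aligned buffers (the raw device interface mentioned above is an alternative), and the 4 KB block size is an arbitrary choice. For the random-access variant, lseek() to a random block offset before each read.

    /* fileread.c -- time sequential block reads of one file while
     * bypassing the buffer cache. Usage: ./fileread <path> */
    #define _GNU_SOURCE              /* for O_DIRECT on Linux */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define BLOCK 4096               /* arbitrary block size */

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;                   /* O_DIRECT needs aligned buffers */
        if (posix_memalign(&buf, BLOCK, BLOCK)) { fprintf(stderr, "memalign failed\n"); return 1; }

        struct timespec t0, t1;
        long blocks = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (read(fd, buf, BLOCK) == BLOCK)
            blocks++;                /* sequential; lseek() here for random */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / 1e3;
        if (blocks)
            printf("%ld blocks, %.2f us/block\n", blocks, us / blocks);
        close(fd);
        return 0;
    }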

References

During the quarter, particularly in its second half, you will read a number of papers describing various system measurements. You may find the papers on the reading list useful as references.

In addition, other papers you may find useful for help with system measurement are:

  • John K. Ousterhout, Why Aren't Operating Systems Getting Faster as Fast as Hardware?, Proc. of the USENIX Summer Conference, pp. 247–256, June 1990.
  • J. Bradley Chen, Yasuhiro Endo, Kee Chan, David Mazières, Antonio Dias, Margo Seltzer, and Michael D. Smith, The Measured Performance of Personal Computer Operating Systems, Proc. of ACM SOSP, pp. 299–313, December 1995.
  • Larry McVoy and Carl Staelin, lmbench: Portable Tools for Performance Analysis, Proc. of the USENIX Annual Technical Conference, January 1996.
  • Aaron B. Brown and Margo I. Seltzer, Operating System Benchmarking in the Wake of lmbench: A Case Study of the Performance of NetBSD on the Intel x86 Architecture, Proc. of ACM SIGMETRICS, pp. 214–224, June 1997.
  • John Ousterhout, Always Measure One Level Deeper, Communications of the ACM, Vol. 61, No. 7, July 2018, pp. 74–83.
  • Xiang (Jenny) Ren, Kirk Rodrigues, Luyuan Chen, Camilo Vega, Michael Stumm, and Ding Yuan, An Analysis of Performance Evolution of Linux's Core Operations, Proc. of the 27th ACM Symposium on Operating Systems Principles (SOSP 2019), pp. 554–569, October 2019.
  • Gernot Heiser, Systems Benchmarking Crimes.

You may read these papers, or other references, for strategies on performing measurements, but you may not examine code to copy or replicate the implementation of a measurement. For example, reading the lmbench paper is fine, but downloading and looking at the lmbench code violates the intent of the project.

Finally, it goes almost without saying that you must implement all of your measurements. You may not download a tool to perform the measurements for you.

Grading

We will grade your project on the relative accuracy of your measurement results (disk reads performing faster than the buffer cache are a bad sign) as well as the quality of your report in terms of methodology description (can we understand what you did and why?), discussion of results (answering specific questions, discussing unexpected behavior), and the writing (lazy writing will hurt your grade).

In the past, a frequent issue we have seen with project reports is that they do not clearly explain the reasoning behind the estimates, methodology, results, etc. As a result, we do not fully understand what you did and why you did it that way. Be sure to explain your reasoning as well.

As a first stage of the project, we would like you to submit an early draft of the first part of the project. What should you cover in the draft? The first two parts of the report (Introduction and Machine Description), and the first set of operations (CPU, Scheduling, and OS Services). For this step, only submit a draft of the report, not your code.

What percentage of the project grade does it form? It will only be worth 5% of your grade. Why so little? The idea with the initial draft is that it is primarily for your own benefit: it will get you started on the project early, and it will give you a sense for how long it will take you to complete the project by the end of the quarter (in the past, students have reported spending 40-120 hours on the project). As a result, you should be able to better budget your time as the end of the quarter arrives. How rough can the draft be? Your call; again, this is primarily for your benefit.


In the second stage of the project, extend your report draft with results for the second set of operations (Memory). Please also hand in your graded report from the first stage for reference.


For the drafts, bring one hardcopy per group with you to class on the deadline.


For the final project reports, submit them as a PDF to me and the TA via email. Also submit your code to us via email, packaged as either a tar.gz or a zip file.