✨ 🥜 UNSW Memes 🥜 ✨
[[TOC]]
- Demonstrate effective use of software development tools to build full-stack end-user applications.
- Demonstrate effective use of static testing, dynamic testing, and user testing to validate and verify software systems.
- Understand key characteristics of a functioning team in terms of understanding professional expectations, maintaining healthy relationships, and managing conflict.
- Demonstrate an ability to analyse complex software systems in terms of their data model, state model, and more.
- Understand the software engineering life cycle in the context of modern and iterative software development practices in order to elicit requirements, design systems thoughtfully, and implement software correctly.
- Demonstrate an understanding of how to use version control and continuous integration to sustainably integrate code from multiple parties.
UNSW's revenue has been going down, despite the absolutely perfect MyExperience feedback.
Realising the bright potential of its students to recreate existing products they pay for, UNSW has tasked me and my army of COMP1531 students with recreating Microsoft Teams.
The 23T1 cohort of COMP1531 students will build the backend Javascript server for a new communication platform, UNSW Memes (or just Memes for short). We plan to task future COMP6080 students to build the frontend for Memes, something you won't have to worry about.
UNSW Memes is the questionably-named communication tool that allows you to share, communicate, and collaborate virtually on a meme-like budget.
We have already specified a common interface for the frontend and backend to operate on. This allows both courses to go off and do their own development and testing under the assumption that both parties will comply with the common interface. This is the interface you are required to use.
The specific capabilities that need to be built for this project are described in the interface at the bottom. This is clearly a lot of features, but not all of them are to be implemented at once.
UNSW thanks you for doing your part in saving them approximately $100 per student, per year, despite making you pay for this course. 😊
(For legal reasons, this is a joke).
Now complete!
Watch the iteration 1 introductory video here. Please note that this video was recorded in 22T2. You should consult this spec for minor changes.
In this iteration, you are expected to:
- Write tests for and implement the basic functionality of Memes. The basic functionality is defined as the `auth*`, `channel*`, `channels*`, `users*`, and `other*` capabilities/functions, as per the interface section below.
  - Test files you add should all be in the form `*.test.js`.
  - Do NOT attempt to write or start a web server. Don't overthink how these functions are meant to connect to a frontend yet; that is for the next iteration. In this iteration you are just focusing on the basic backend functionality.
- Write down any assumptions that you feel you are making in your interpretation of the specification.
  - The `assumptions.md` file described above should be in the root of your repository. If you've not written markdown before (we assume most of you haven't), it's not necessary to research the format. Markdown is essentially plain text with a few extra features for basic formatting. You can just stick with plain text if you find that easier.
  - We will only be marking the quality of SIX of your assumptions. You can indicate which ones you would like marked; otherwise we will look at the first six.
- Follow best practices for git, project management, and effective teamwork, as discussed in lectures.
  - The marking will be heavily biased toward how well you follow good practices and work together as a team. Just having a "working" solution at the end is not, on its own, sufficient to even get a passing mark.
  - You need to use the GitLab Issue Boards for your task tracking and allocation. Spend some time getting to know how to use the taskboard. If you would like to use another collaborative task tracker (e.g. Jira, Trello, Airtable), you must first get approval from your tutor and grant them administrator access to your team board.
  - You are expected to meet regularly with your group and document the meetings via meeting minutes, which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo Wiki after each meeting).
  - You should have regular standups and be able to demonstrate evidence of this to your tutor.
  - For this iteration, you will need to collectively make a minimum of 12 merge requests into `master`.
- You should first approach this project by considering its distinct "features". Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
- Create a new branch.
- Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
- Implement that feature.
- Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you're not spending enough time on your tests.
- Consider any assumptions you made in the previous steps and add them to `assumptions.md`.
- Create a merge request for the branch.
- Get someone in your team who did not work on the feature to review the merge request.
- Fix any issues identified in the review.
- Merge the merge request into `master`.
For this project, a feature is typically sized somewhere between a single function and a whole file of functions (e.g. `auth.js`). It is up to you and your team to decide what each feature is.
There is no requirement that each feature is implemented by only one person. In fact, we encourage you to work together closely on features, especially to help those who may still be coming to grips with Javascript.
Please pay careful attention to the following:
- We want to see evidence that you wrote your tests before writing the implementation. As noted above, the commits containing your initial tests should appear before your implementation for every feature branch. If we don't see this evidence, we will assume you did not write your tests first and your mark will be reduced.
- Merging in merge requests with failing tests is very bad practice. Not only does this interfere with your team's ability to work on different features at the same time, and thus slow down development, it is something you will be penalised for in marking.
- Similarly, merging in branches with untested features is also very bad practice. We will assume, and you should too, that any code without tests does not work.
- Pushing directly to `master` is not possible for this repo. The only way to get code into `master` is via a merge request. If you discover you have a bug in `master` that got through testing, create a bugfix branch and merge that in via a merge request.
- As is the case with any system or functionality, there will be some things that you can test extensively, some things that you can test sparsely/fleetingly, and some things that you can't meaningfully test at all. You should aim to test as extensively as you can, and make judgements as to what things fall into what categories.
The tests you write should be as small and independent as possible. This makes it easier to identify why a particular test may be failing. Similarly, try to make it clear what each test is testing for. Meaningful test names and documentation help with this. An example of how to structure tests is provided in:
- `src/echo.js`
- `src/echo.test.js`
The echo functionality is tested, both for correct behaviour and for failing behaviour. As echo is relatively simple functionality, only 2 tests are required. For the larger features, you will need many tests to account for many different behaviours.
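For reference, here is a sketch of tests in that style. It assumes, as in the provided starter code, that `echo` returns its argument normally and returns `{error: 'error'}` for the special input `'echo'` (check `src/echo.test.js` for the actual version):
```javascript
import { echo } from './echo.js';

test('echo returns the value it was given', () => {
  expect(echo('hello')).toStrictEqual('hello');
});

test('echo rejects the one disallowed input', () => {
  // Assumption: the starter implementation treats the literal string 'echo' as an error
  expect(echo('echo')).toStrictEqual({ error: 'error' });
});
```
This mirrors the two behaviours described above: one test for correct behaviour and one for failing behaviour.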
The files from iteration 0 should be developed with actual implementations, in addition to the new `other.js` and `users.js` files:
- `auth.js`
- `channel.js`
- `channels.js`
- `users.js`
- `other.js`

The `userProfileV1` function should be included in `users.js`, and the `clearV1` function should be included in `other.js`.
Elements of securely storing passwords and other tricky authorisation methods are not required for iteration 1. You can simply store passwords plainly, and use the user ID to identify each user. We will discuss ways to improve the quality and methods of these capabilities in the later iterations.
Note that the `authUserId` variable is simply the user ID of the user who is making the function call. For example:
- A user registers an account with UNSW Memes and is assigned some integer ID, e.g. `42`, as their user ID.
- When they make subsequent calls to functions, their user ID - in this case, `42` - is passed in as the `authUserId` argument.
Since `authUserId` refers to the user ID of the user calling the functions, you do NOT need to store separate user IDs (e.g. a `uId` or `userId` in addition to an `authUserId`) to identify each user in your data structure - you only need to store one user ID. How you name this user ID property in your data structure is up to you.
To test basic functionality you will likely need code like:
```javascript
let result = authRegisterV1('validemail@gmail.com', '123abc!@#', 'Jake', 'Renzella');
authLoginV1('validemail@gmail.com', '123abc!@#'); // Expect to work since we registered
```
However, when deciding how to structure your tests, keep in mind the following:
- Your tests should be black box unit tests.
  - Black box means they should not depend on your specific implementation, but rather work with any working implementation. You should design your tests such that if they were run against another group's backend they would still pass.
  - For iteration 1, you should not be importing the `data` object itself or directly accessing it via the `get` or `set` functions from `src/dataStore.js` inside your tests.
  - Unit tests mean the tests focus on testing particular functions, rather than the system as a whole. Certain unit tests will depend on other tests succeeding. It's OK to write tests that are only a valid test if other functions are correct (e.g. to test `channel` functions you can assume that `auth` is implemented correctly).
- Avoid writing your tests such that they need to be run in a particular order. That can make it hard to identify what exactly is failing.
- You should reset the state of the application (e.g. deleting all users, channels, messages, etc.) at the start of every test. That way you know none of them are accidentally dependent on an earlier test. You can use a function for this that is run at the beginning of each test (hint: `clearV1`).
- If you find yourself needing similar code at the start of a series of tests, consider using Jest's `beforeEach` to avoid repetition.
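Putting these points together, a black-box test file could be shaped like the following sketch (the function names come from the interface below; the specific scenario and expectations are up to you):
```javascript
import { clearV1 } from './other.js';
import { authRegisterV1 } from './auth.js';
import { channelsCreateV1 } from './channels.js';

beforeEach(() => {
  // Reset all users, channels, messages, etc. so no test depends on an earlier one
  clearV1();
});

test('a registered user can create a channel', () => {
  const { authUserId } = authRegisterV1('validemail@gmail.com', '123abc!@#', 'Jake', 'Renzella');
  const result = channelsCreateV1(authUserId, 'memes', true);
  // Black box: assert only on the documented return shape, not on internal data
  expect(result).toStrictEqual({ channelId: expect.any(Number) });
});
```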
Nearly all of the functions will likely have to reference some "data source" to store information. For example, if you register two users, create two channels, and then add a user to a channel, all of that information needs to be "stored" somewhere. The most important thing for iteration 1 is not to overthink this problem.
Firstly, you should not use an SQL database, or something like Firebase.
Secondly, you don't need to make anything persist. That means that if you run all your tests, and then run them again later, it's OK for the data to be "fresh" each time. We will cover persistence in another iteration.
Inside `src/dataStore.js` we have provided you with an object called `data` which will contain the information that you will need to access across multiple functions. An explanation of how to get and set the data is in `dataStore.js`. You will need to determine the internal structure of the object. If you wish, you are allowed to modify this data structure.
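In your implementation functions, the usual flow is to read the store, modify it, then write it back. For instance (a sketch only; it assumes the helpers provided in `src/dataStore.js` are named `getData` and `setData`, and that error checks happen elsewhere):
```javascript
import { getData, setData } from './dataStore.js';

function channelsCreateV1(authUserId, name, isPublic) {
  // Read the current state of the data store
  const data = getData();

  // Mutate it: add the new channel, with its creator as first owner and member
  const channelId = data.channels.length + 1;
  data.channels.push({
    channelId: channelId,
    name: name,
    isPublic: isPublic,
    ownerMembers: [authUserId],
    allMembers: [authUserId],
  });

  // Write the modified state back
  setData(data);
  return { channelId: channelId };
}
```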
For example, you could define a structure in a file that is empty, and as functions are called, the structure populates and fills up like the one below:
```javascript
let data = {
  'users': [
    {
      'id': 1,
      'name': 'user1',
    },
    {
      'id': 2,
      'name': 'user2',
    },
  ],
  'channels': [
    {
      'id': 1,
      'name': 'channel1',
    },
    {
      'id': 2,
      'name': 'channel2',
    },
  ],
}
```
We have provided a very simple dryrun for iteration 1 consisting of 4 tests, one each for your implementation of `clearV1`, `authRegisterV1`, `channelsCreateV1`, and `channelsListV1`. These only check the format of your return types and simple expected behaviour, so do not rely on these as an indicator of the correctness of your implementation or tests.
To run the dryrun, you should be on a CSE machine (i.e. using VLAB or ssh'ed into CSE), in the root directory of your project (e.g. `/project-backend`), and use the command:
```bash
1531 dryrun 1
```
Tips to ensure the dryrun runs successfully:
- Files used for imports are appended with `.js`, e.g. `import { clearV1 } from './other.js';`
- Files sit within the `/src` directory
Here are a few examples of bad assumptions:
- Assume that all groups store their data in a field called `data` which is located in `dataStore.js`
- Assume all individual return values are returned as single values rather than being stored in an object
- Assume the functions are written correctly
- Assume the input `authUserId` is valid
Bad assumptions are usually ones that directly contradict an explicit or implicit requirement in the specification. Good assumptions are ones that fill holes or gaps in requirements.
Avoid "assumptions" that simply describe the implementation details irrelevant to the client, e.g. a particular method of ID generation. Instead, consider the scenarios in which the expected behaviour of Memes is not addressed clearly in the spec and document, with reasoning, your assumptions regarding such scenarios.
This iteration provides challenges for many groups when it comes to working in parallel. Your group's initial reaction will be that you need to complete registration before you can complete channel creation, and then channel creation must be done before you can invite users into channels, etc.
There are several approaches that you can consider to overcome these challenges:
- Have people working on downstream tasks (like the `channels` implementation) work with stubbed versions of the upstream tasks, e.g. the register function is stubbed to return a successful dummy response, so two people can start work in parallel (see the sketch after this list).
- Co-ordinate with your team to ensure prerequisite features are completed first (e.g. Giuliana completes `authRegister` on Monday, meaning Hayden can start `channelsCreate` on Tuesday).
- You can pull any other remote branch into your own using the command `git pull origin <branch_name>`.
  - This can be helpful when two people are working on functions on separate branches where one function is a prerequisite of the other, and an implementation is required to keep the pipeline passing.
- You should pull from `master` on a regular basis to ensure your code remains up-to-date.
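To illustrate the stubbing approach from the first point above, the person working on `channels` could temporarily rely on a dummy register function (a sketch; the stub is deleted once the real `authRegisterV1` lands):
```javascript
// Temporary stub: always "succeeds" with a fixed user ID so that work on
// the channels functions can begin before auth is actually implemented.
function authRegisterV1(email, password, nameFirst, nameLast) {
  return { authUserId: 1 };
}
```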
| Section | Weighting | Criteria |
|---|---|---|
| Automarking (Testing & Implementation) | 40% | |
| Code Quality | 25% | |
| Git Practices | 15% | |
| Project Management & Teamwork | 15% | |
| Assumptions markdown file | 5% | |
For this and for all future milestones, you should consider the other expectations as outlined in section 6 below.
The formula used for automarking in this iteration is:
`Mark = t * i`
Where:
- `t` is the mark you receive for your tests running against your code (100% = your implementation passes all of your tests)
- `i` is the mark you receive for our course tests (hidden) running against your code (100% = your implementation passes all of our tests)
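For example, if 90% of your own tests pass against your implementation (`t = 0.9`) and your implementation passes 80% of our course tests (`i = 0.8`), your automark is `0.9 * 0.8 = 0.72`, i.e. 72%.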
This iteration's due date and demonstration week are stated in section 5. You will demonstrate this submission in line with the information provided in section 5.
Reference 6.5.
These interface specifications come from Hayden & COMP6080, who are building the frontend to the requirements set out below.
| Variable name | Type |
|---|---|
| named exactly email | string |
| has suffix id | integer |
| contains substring password | string |
| named exactly message | string |
| named exactly start | integer |
| contains substring name | string |
| has prefix is | boolean |
| Variable name | Type |
|---|---|
| named exactly email | string |
| has suffix id | integer |
| named exactly length | integer |
| named exactly start | integer |
| contains substring password | string |
| named exactly message | string |
| contains substring name | string |
| has prefix is | boolean |
| has prefix time | integer (unix timestamp in seconds) |
| (outputs only) named exactly messages | Array of objects, where each object contains types { messageId, uId, message, timeSent } |
| (outputs only) named exactly channels | Array of objects, where each object contains types { channelId, name } |
| has suffix Str | string |
| (outputs only) named exactly user | Object containing uId, email, nameFirst, nameLast, handleStr |
| (outputs only) name ends in members | Array of objects, where each object contains types of user |
| (outputs only) named exactly users | Array of objects, where each object contains types of user |
All return values should be an object, with keys identically matching the names in the table below, along with their respective values.
All error cases should return `{error: 'error'}`, where the error message in quotation marks can be anything you like (this will not be marked).

| Name & Description | Data Types | Error returns |
|---|---|---|
| `authLoginV1`<br>Given a registered user's email and password, returns their `authUserId` value. | Parameters: `( email, password )`<br>Return type if no error: `{ authUserId }` | Return object `{error: 'error'}` when any of: |
| `authRegisterV1`<br>Given a user's first and last name, email address, and password, creates a new account for them and returns a new `authUserId`. A unique handle will be generated for each registered user. The user handle is created as follows: | Parameters: `( email, password, nameFirst, nameLast )`<br>Return type if no error: `{ authUserId }` | Return object `{error: 'error'}` when any of: |
| `channelsCreateV1`<br>Creates a new channel with the given name, that is either a public or private channel. The user who created it automatically joins the channel. | Parameters: `( authUserId, name, isPublic )`<br>Return type if no error: `{ channelId }` | Return object `{error: 'error'}` when any of: |
| `channelsListV1`<br>Provides an array of all channels (and their associated details) that the authorised user is part of. | Parameters: `( authUserId )`<br>Return type if no error: `{ channels }` | Return object `{error: 'error'}` when any of: |
| `channelsListAllV1`<br>Provides an array of all channels, including private channels (and their associated details). | Parameters: `( authUserId )`<br>Return type if no error: `{ channels }` | Return object `{error: 'error'}` when any of: |
| `channelDetailsV1`<br>Given a channel with ID `channelId` that the authorised user is a member of, provides basic details about the channel. | Parameters: `( authUserId, channelId )`<br>Return type if no error: `{ name, isPublic, ownerMembers, allMembers }` | Return object `{error: 'error'}` when any of: |
| `channelJoinV1`<br>Given a `channelId` of a channel that the authorised user can join, adds them to that channel. | Parameters: `( authUserId, channelId )`<br>Return type if no error: `{}` | Return object `{error: 'error'}` when any of: |
| `channelInviteV1`<br>Invites a user with ID `uId` to join a channel with ID `channelId`. Once invited, the user is added to the channel immediately. In both public and private channels, all members are able to invite users. | Parameters: `( authUserId, channelId, uId )`<br>Return type if no error: `{}` | Return object `{error: 'error'}` when any of: |
| `channelMessagesV1`<br>Given a channel with ID `channelId` that the authorised user is a member of, returns up to 50 messages between index "start" and "start + 50". The message with index 0 (i.e. the first element in the returned array of messages) is the most recent message in the channel. This function returns a new index, "end". If there are more messages to return after this function call, "end" equals "start + 50". If this function has returned the least recent messages in the channel, "end" equals -1 to indicate that there are no more messages to load after this return. | Parameters: `( authUserId, channelId, start )`<br>Return type if no error: `{ messages, start, end }` | Return object `{error: 'error'}` when any of: |
| `userProfileV1`<br>For a valid user, returns information about their user ID, email, first name, last name, and handle. | Parameters: `( authUserId, uId )`<br>Return type if no error: `{ user }` | Return object `{error: 'error'}` when any of: |
| `clearV1`<br>Resets the internal data of the application to its initial state. | Parameters: `()`<br>Return type if no error: `{}` | N/A |
To check an email is valid, you may use the following package and function.
```javascript
import validator from 'validator';

validator.isEmail('foo@bar.com');
```
A common question asked throughout the project is usually "How can I test this?" or "Can I test this?". In any situation, most things can be tested thoroughly. However, some things can only be tested sparsely, and on some rare occasions, some things can't be tested at all. A challenge of this project is for you to use your discretion to figure out what to test, and how much to test. Often, you can use the functions you've already written to test new functions in a black-box manner.
The behaviour in which `channelMessages` returns data is called pagination. It's a commonly used method for getting theoretically unbounded amounts of data from a server to display on a page in chunks. Most of the timelines you know and love - Facebook, Instagram, LinkedIn - do this.
For example, in iteration 1, if we imagine a user with authUserId 12345 is trying to read messages from channel with ID 6, and this channel has 124 messages in it, 3 calls from the client to the server would be made. These calls, and their corresponding return values would be:
- `channelMessages(12345, 6, 0)` => `{ [messages], 0, 50 }`
- `channelMessages(12345, 6, 50)` => `{ [messages], 50, 100 }`
- `channelMessages(12345, 6, 100)` => `{ [messages], 100, -1 }`
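A sketch of how those return values could be computed, assuming the channel's messages are stored (or pre-sorted) most-recent-first and that validity checks have already been done:
```javascript
// Hypothetical internal helper for channelMessagesV1
function paginateMessages(messages, start) {
  // messages[0] is the most recent message in the channel
  const page = messages.slice(start, start + 50);
  // -1 signals that the least recent messages have been reached
  const end = start + 50 >= messages.length ? -1 : start + 50;
  return { messages: page, start: start, end: end };
}
```
With the 124-message channel above, `paginateMessages(messages, 100)` returns the last 24 messages with `end` equal to -1.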
There are TWO different types of permissions: global permissions and channel-specific permissions. A user's primary permissions are their global permissions. Then the channel permissions are layered on top.
- Global permissions
  - Owners (permission id 1), who can also modify other owners' permissions
  - Members (permission id 2), who do not have any special permissions
- Channel permissions
  - Owners of the channel
  - Members of the channel
Additional Rules:
- Global permissions
  - All Memes users are global members by default, except for the very first user who signs up, who is a global owner.
- Channel permissions
  - A global owner has the same permissions as a channel owner in every channel they're part of. They do not become a channel owner unless explicitly added as one (by a channel owner, or themselves). Hence, if they are removed as a global owner (and are not a channel owner), they will no longer have those channel owner permissions.
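One way the layered check could look in code (a hypothetical helper; the property names and data layout are your group's choice):
```javascript
// Does this user have owner-level powers within the given channel?
function hasOwnerPermissions(user, channel) {
  const isGlobalOwner = user.permissionId === 1;
  const isChannelOwner = channel.ownerMembers.includes(user.uId);
  const isChannelMember = channel.allMembers.includes(user.uId);
  // Global owners get channel-owner powers in any channel they are part of
  return isChannelOwner || (isGlobalOwner && isChannelMember);
}
```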
| Iteration | Due date | Demonstration to tutor(s) | Assessment weighting (%) |
|---|---|---|---|
| 0 | 10pm Friday 24th February (week 2) | No demonstration | 5% of project mark |
| 1 | 10pm Friday 10th March (week 4) | In YOUR week 5 laboratory | 30% of project mark |
| 2 | 10pm Friday 31st March (week 7) | In YOUR week 8 laboratory | 35% of project mark |
| 3 | 10pm Monday 17th April (week 10) | No demonstration | 30% of project mark |
| 4 | 10pm Friday 28th April (week 11) | Video presentation | 30% of course mark |
There is no late penalty, as we do not accept late submissions. You will be assessed on the most recent version of your work at the due date and time.
To submit your work, open up a CSE terminal and run:
```bash
$ 1531 submit [iteration] [groupname]
```
For example:
```bash
$ 1531 submit iteration1 W11A_EGGS
```
Only one person per group needs to run this command. This will submit a copy of your latest git commit to our systems for automarking.
NOTE: Our automarking will be run on your `master` branch at the time of submission; the `1531 submit` command is for record-keeping purposes only.
Your tutor will also request you pull up this copy when marking you in the demonstration.
If the deadline is approaching and you have features that are either untested or failing their tests, DO NOT MERGE IN THOSE MERGE REQUESTS. In some cases, your tutor will look at unmerged branches and may allocate some reduced marks for incomplete functionality, but master should only contain working code.
Minor isolated fixes after the due date are allowed, but carry a penalty to the automark if the automark after re-running the autotests is greater than your initial automark. This penalty can be up to 30% of the automark for that iteration, depending on the number and nature of your fixes. Note that if the re-run automark after the penalty is lower than your initial mark, we will keep your initial mark, meaning your automark cannot decrease after a re-run. E.g. imagine your initial automark is 50%, on re-run you get a raw automark of 70%, and your fixes attract a 30% penalty: the penalty reduces the 70% to 49%, so your final automark will still be 50% (i.e. your initial mark).
If you want to have your automarking re-run:
- create a branch, e.g. `iter1-fix`, based off the submission commit
- make the minimal number of necessary changes (i.e. only fix the trivial bugs that cost you many automarks)
- push the changes to GitLab
- share the name of the branch with your tutor
The demonstrations in weeks 5 and 8 will take place during your lab sessions. All team members must attend these lab sessions. Team members who do not attend a demonstration may receive a mark of 0 for that iteration. If you are unable to attend a demonstration due to circumstances beyond your control, you must apply for special consideration.
Demonstrations consist of a 15 minute Q&A in front of your tutor and potentially some other students in your tutorial. For online classes, webcams and audio are required to be on during this Q&A (your phone is a good alternative if your laptop/desktop doesn't have a webcam).
Half of each iteration's mark is awarded to your group as a whole. The other half is given to you individually. This is the case for all iterations going forward, except for iteration 4 which is entirely individually marked. For example, if iteration 1 is worth 30% of the project mark, 15% is awarded as a group mark and 15% is awarded individually.
Your individual mark is determined by your tutor, with your group mark as a reference point. Your tutor will look at the following items each iteration to determine your mark:
- Project check-in
- Code contribution
- Tutorial contributions
- Peer assessment
In general, all team members will receive the same mark (a sum of the marks for each iteration), but if you as an individual fail to meet these criteria, your final project mark may be scaled down, most likely quite significantly.
During your lab class, you and your team will conduct a short standup in the presence of your tutor. Each member of the team will briefly state what they have done in the past week, what they intend to do over the next week, and what issues they have faced or are currently facing. This is so your tutor, who is acting as a representative of the client, is kept informed of your progress. They will make note of your presence and may ask you to elaborate on the work you've done.
Project check-ins are also excellent opportunities for your tutor to provide you with both technical and non-technical guidance.
Your attendance and participation at project check-ins will contribute to your individual mark component for the project. In addition, your tutor will note down any absences from team-organised standups.
These are easy marks. They are marks we assume you will receive automatically, and they are yours to lose if you neglect them.
The following serves as a baseline for expected progress during project check-ins, in the specified weeks. For groups which do not meet this baseline, teamwork marks and/or individual scaling may be impacted.
| Iteration | Week/Check-in | Expected progress |
|---|---|---|
| 0 | Week 2 | Twice-weekly standup meeting times organised, iteration 0 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 1 | Week 3 | Iteration 1 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 1 | Week 4 | 1x function per person complete (tests and implementation in master) |
From week 2 onward, your individual project mark may be reduced if you do not satisfy the following:
- Attend all tutorials
- Participate in tutorials by asking questions and offering answers
- [online only] Have your web cam on for the duration of the tutorial and lab
We're comfortable with you missing or disengaging with 1 tutorial per term, but for anything more than that please email your tutor. If you cannot meet one of the above criteria, you will likely be directed to special consideration.
These are easy marks. They are marks we assume you will receive automatically, and they are yours to lose if you neglect them.
All team members must contribute code to the project to a generally similar degree. Tutors will assess the degree to which you have contributed by looking at your git history and analysing lines of code, number of commits, timing of commits, etc. If you contribute significantly less code than your team members, your work will be closely examined to determine what scaling needs to be applied.
Note that contributing more code does not compensate for contributing too little documentation.
All team members must contribute documentation to the project to a generally similar degree.
In terms of code documentation, your functions such as `authRegister`, `channelInvite`, `userProfile`, etc. are required to contain comments in JSDoc format, including parameters and return values:
```javascript
/**
 * <Brief description of what the function does>
 *
 * @param {data type} name - description of parameter
 * @param {data type} name - description of parameter
 * ...
 *
 * @returns {data type} - description of condition for return
 * @returns {data type} - description of condition for return
 */
```
In each iteration you will be assessed on ensuring that every relevant function in the specification is appropriately documented.
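For instance, a documented header for `channelsCreateV1` might read (a sketch; the exact wording is up to you):
```javascript
/**
 * Creates a new public or private channel and adds its creator as the
 * first owner and member.
 *
 * @param {number} authUserId - user ID of the user creating the channel
 * @param {string} name - display name of the new channel
 * @param {boolean} isPublic - whether the channel is public
 *
 * @returns {{channelId: number}} - when the channel is created successfully
 * @returns {{error: string}} - when a parameter is invalid
 */
```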
In terms of other documentation (such as reports and other notes in later iterations), we expect that group members will contribute equally.
Note that contributing more documentation does not compensate for contributing too little code.
At the end of each iteration, there will be a peer assessment survey where you will rate and leave comments about each team member's contribution to the project up until that point.
Your other team members will not be able to see how you rated them or what comments you left in any peer assessment. If your team members give you a less than satisfactory rating, your contribution will be scrutinised and you may find your final mark scaled down.
| Iteration | Link | Opens | Closes |
|---|---|---|---|
| 1 | Click here | 10pm Friday 10th March | 9am Monday 13th March |
| 2 | Click here | 10pm Friday 31st March | 9am Monday 3rd April |
| 3 | Click here | 10pm Monday 17th April | 9am Thursday 20th April |
We are aware that when a group member does not contribute equally, it can implicitly impact your own mark by pulling the group mark down (e.g. through not finishing a critical feature).
The first step of any disagreement or issue is always to talk to your team member(s) in your group chats on MS Teams. Make sure you have:
- Been clear about the issue you feel exists
- Been clear about what you feel needs to happen and in what time frame to feel the issue is resolved
- Gotten clarity that your team member(s) want to make the change.
If you don't feel that the issue is being resolved quickly, you should escalate the issue by talking to your tutor with your group in a project check-in, or alternatively by emailing your tutor privately outlining your issue.
It's imperative that issues are raised to your tutor ASAP, as we are limited in the mark adjustments we can do when issues are raised too late (e.g. we're limited with what we can do if you email your tutor with iteration 2 issues after iteration 2 is due).
Each iteration consists of an automarking component. The particular formula used to calculate this mark is specific to the iteration (and detailed above).
When running your code or tests as part of the automarking, we place a 2.5 minute timer on the running of your group's tests. This is more than enough time to complete everything unless you're doing something very wrong or silly with your code. As long as your tests take under 2.5 minutes to run on the pipeline, you don't have to worry about it potentially taking longer when we run automarking.
In the days preceding iterations 1, 2, and 3's due date, we will be running your code against the actual automarkers (the same ones that determine your final mark) and publishing the results of every group on a leaderboard. The leaderboard will be available here once released.
You must have the code you wish to be tested in your master branch by 10pm the night before leaderboard runs.
The leaderboard will be updated on Monday, Wednesday, and Friday morning during the week that the iteration is due.
Your position and mark on the leaderboard will be referenced against an alias for your group (for privacy). This alias will be emailed to your group in week 3. You are welcome to share your alias with others if you choose!
The leaderboard gives you a chance to sanity check your automark (without knowing the details of what you did right and wrong), and is just a bit of fun.
If the leaderboard isn't updating for you, try hard-refreshing your browser (Ctrl+R or Command+R), clearing your cache, or opening it in a private window. Also note the HTTP (not HTTPS) in the URL, as the site is only accessible via HTTP.
The work you and your group submit must be your own work. Submission of work partially or completely derived from any other person or jointly written with any other person is not permitted. The penalties for such an offence may include negative marks, automatic failure of the course and possibly other academic discipline. Assignment submissions will be examined both automatically and manually for such submissions.
Relevant scholarship authorities will be informed if students holding scholarships are involved in an incident of plagiarism or other misconduct.
Do not provide or show your project work to any other person, except for your group and the teaching staff of COMP1531. If you knowingly provide or show your assignment work to another person for any reason, and work derived from it is submitted, you may be penalised, even if the work was submitted without your knowledge or consent. This may apply even if your work is submitted by a third party unknown to you.
Note: you will not be penalised if your work is taken without your consent or knowledge.