
Orillia OPP use GPS to track down stolen construction equipment

A 26-year-old London man is charged after Orillia OPP used a GPS tracker to find a stolen trailer and skid-steer loader.

Officers recovered the stolen goods, valued at more than $100,000, after they learned a tow truck was hauling the construction equipment north on Hwy. 400 Sept. 26.

The GPS tracker was installed in the equipment.

Officers pulled over the tow truck on Hwy. 11 south of Orillia in Oro-Medonte.

The suspect was charged with possession of stolen property over $5,000 and released on a promise to appear in Orillia court Nov. 24.

Can algorithms be biased or harmful? Yes, here’s how

Algorithms have changed modern society for the better in a number of ways, through advances in technology, optimized experiences online and so much more.

But what happens when the algorithm that gets built ends up doing more harm than good? At what point can an algorithm fail?


We asked experts to weigh in on algorithm bias and just how much control a data scientist actually has over their own creation. 

How can algorithms be biased or harmful?

When comparing two effective algorithms producing results for big tech companies, there’s really no such thing as a good or a bad one.

It’s more complicated than that.

The algorithm itself is an objective tool to get from a problem to a solution. It’s typically built by computer or data scientists to learn from certain data sets and then work to solve a specific problem.

The quality of the data set affects the outcome. This means that if the data is biased, so is the algorithm, even if the algorithm itself may be producing the intended result, said Stephen Chen, associate professor of information technology at York University.

A few years back, he noted, a hiring algorithm was found to be filtering out women after learning from a data set of resumes that reflected a male-dominated industry. Based on the input, the algorithm had “learned” that male candidates were best suited for the job.

“If you have skewed data on the input, you will basically reinforce your bias,” he said. 
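
Chen’s point about skewed inputs can be illustrated with a minimal sketch (not from the article; the hiring data below is invented for illustration): a model that estimates hiring probability straight from a biased historical data set simply reproduces that bias when scoring new candidates.

```python
# Minimal sketch: a naive "model" trained on skewed hiring history.
# All numbers are made up; the point is that skew in the input
# becomes skew in the output.
from collections import Counter

# Historical resumes as (gender, hired) pairs from a male-dominated industry.
history = ([("M", True)] * 80 + [("M", False)] * 20 +
           [("F", True)] * 5 + [("F", False)] * 15)

counts = Counter(history)

def hire_score(gender):
    # Estimate P(hired | gender) directly from the biased history.
    hired = counts[(gender, True)]
    total = hired + counts[(gender, False)]
    return hired / total

print(hire_score("M"))  # 0.80 -- male candidates look "best suited"
print(hire_score("F"))  # 0.25 -- past under-representation becomes a low score
```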

Chen said the same holds for discrimination against racialized communities, using U.S. health care predictions as an example.

“If racialized people have less access to health care, then all the AI algorithms will predict less access to health care,” he continued. “It codifies past discrimination.”

In addition to perpetuating biases and discrimination, algorithms also have the power to predict certain behaviours or circumstances in ways that can hurt specific groups of people, said Salma Karray, a marketing professor at Ontario Tech University.

For example, Karray explained, there was a case in the U.S. in 2012 involving Target that sparked privacy concerns after the retailer used an algorithm to predict that a teenage customer was pregnant and sent her a maternity pamphlet in the mail, before her father found out.

As well, she said, an algorithm can also potentially encourage addictive behaviours, such as gambling, by targeting users who appear interested in a certain activity online. 

Can algorithms be controlled?

A self-driving vehicle accelerates at the wrong time and kills its driver. This is a real-life example of an algorithm gone wrong, Chen said.

But from a results perspective, this machine’s algorithm technically learned what it was expected to learn: how to automatically drive a vehicle.

“There’s nothing more dangerous than assuming your code works because it gives you the result you were expecting,” Chen said.

He said machine intelligence differs from human intelligence in ways that we, as humans, may never understand.

“If the machines have a different form of intelligence and do things differently, and receive things and understand things differently, then if we give control to the algorithms, to something that we don’t fully understand, do we have control over them still?” he asked.

“Once you train an algorithm to learn outside of the lab, you no longer have any idea what it’s doing because it has learned something that you did not know it was going to learn. Then it can do things that nobody was expecting.” 

What are the ethics behind algorithms?

While there is ongoing work on algorithms to make them smarter and better at making decisions, Chen said there is a lack of ethics in the process as a whole, which is more of a historical problem in science in general.

“It’s like, let’s build the bomb first and then talk about the ethical implications about nuclear power after, right?” he said.

Chen said that when it comes to regulation, more robust privacy legislation aimed at big tech companies has emerged recently in certain parts of the world (such as the General Data Protection Regulation in the EU); however, algorithms themselves are difficult to regulate.

“It’s really hard to say that you are not allowed to be exposed to an algorithm because the algorithms – they track everything,” he said. “Data is the cost of connectivity.” 

MPP Andrea Khanjin acclaimed as candidate by Barrie-Innisfil Conservative Riding Association

When Barrie-Innisfil residents head to the polls in 2022 for the provincial election, the incumbent MPP will be on the ballot.

On Sept. 28, the Barrie-Innisfil Conservative Riding Association announced that its members had acclaimed Andrea Khanjin, the area’s current MPP, as the Barrie-Innisfil candidate for the 2022 Ontario election.

Khanjin thanked the association online.

“It has been an honour to be your MPP since 2018, and during this time we have achieved so much by taking great ideas from our home and bringing them to Queen’s Park,” she wrote.

The next Ontario election is scheduled to happen on or before June 2, 2022.