
Project Updates: BEER PONG

by Ryan Summers

Over the course of winter break, I finally got a chance to work a bit on our beer-pong robot. Our parts had come in, and I figured there was nothing better to work on with my newfound free time. Our mechanical engineers created a small mock setup for the electrical team, and I decided to hook our motors up to it!

This gave me an excellent platform to start working on. I hooked up our H-bridge motor driver and started looking through the documentation for the controller. I quickly realized that my original plan of using a PIC microcontroller would not work well in the short term, since the PIC supplies 3.3V logic while the controller works off 5V logic. Instead, I decided to switch to an AVR microcontroller to make use of its 5V logic until I could create a board to level-shift the signals from the PIC. This would come to have much larger implications than I originally expected, but I’ll get to that later. To program the microcontroller, I used the SUBLIBinal library that James Irwin and I developed for the RoboSub club. The code was originally written for PIC microcontrollers, but we intended to port it to AVR. This project forced me to finish that port, and the library let me implement the microcontroller functionality very quickly and without many hang-ups.

I quickly wired the microcontroller to the motor driver on a breadboard and began writing some code. The initial microcontroller code got the motors spinning, but the setup was far from perfect. I really needed a way to control the motor speeds, which meant the microcontroller needed to communicate with the computer somehow, so I decided to implement a UART interface. After buying a UART-serial converter and writing some serial programming code on the computer side, I had an efficient line of communication so that the computer and the microcontroller could exchange information. However, this setup had quite a few issues: 1) the serial interface was through the command line and was extremely difficult to use effectively; 2) it would often hang when attempting to read, due to blocking calls; and 3) the motors often stalled on start-up due to changes in friction and would stop rotating, even when operating at the same PWM duty cycle. Additionally, there was no logical way to specify motor speeds besides duty-cycle percentages. To combat the friction issue and to create an effective way to communicate, I decided to implement a PID feedback control system.

We had ordered some small magnets and some hall-effect sensors for just this purpose, but I had never really worked with PID controllers before. I set up the hall-effect sensors close to the wheels and taped on the magnets so that an interrupt was generated on the microcontroller whenever one passed by. Some simple math on that data gave the microcontroller an RPM value. Now I could send the microcontroller a desired RPM, and it would also have a value for its current RPM. The next step was to implement some form of control system so that the microcontroller could dynamically adjust its PWM duty cycle to achieve the desired RPM. After a small amount of research, I realized that implementing a PID controller on a microcontroller is actually quite a simple task! By storing just a bit of previous state information, the microcontroller can periodically update the PWM duty cycle based on the feedback from the hall-effect sensors. After tinkering with the parameters of the PID controller, I found some rough values that gave me consistently held RPMs.
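
To illustrate the idea, here is a minimal sketch of that kind of PID update loop. It is written in Python for readability (the real code runs in C on the microcontroller), and the gains, limits, and names are illustrative placeholders, not the values used on the robot.

```python
# Illustrative PID update, as described above. On the robot this logic runs in C
# inside a periodic timer routine on the microcontroller; the gains and limits
# below are made-up placeholder values.

class PID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0      # accumulated error (the stored state)
        self.prev_error = 0.0    # error from the previous update

    def update(self, target_rpm, measured_rpm, dt):
        """Return a new PWM duty cycle (percent) from the RPM error."""
        error = target_rpm - measured_rpm
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        duty = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp so the motor driver never receives an out-of-range duty cycle.
        return max(self.out_min, min(self.out_max, duty))

# Example: one update at a 50 Hz control rate with rough gains.
controller = PID(kp=0.05, ki=0.02, kd=0.001)
duty = controller.update(target_rpm=1200, measured_rpm=1100, dt=0.02)
```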

Now that logical communication and a control system were in place, I knew I needed to fix the computer-side interface. I had done some previous work with Qt GUIs for the RoboSub club, so I decided that creating some form of GUI would be the easiest way to interface with the microcontroller. That way, I could display the microcontroller’s current RPM and set goal values, all in real time. However, this didn’t quite address the issue of the serial port blocking application execution. After some thought, I realized I could use a dedicated thread for reading and writing to the serial port and run the GUI separately. Over the course of a day, I wrote a small GUI to display the information on screen so that anyone could set the goals easily. Below is the GUI after a number of improvements and iterations!
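
For anyone curious, a rough sketch of that threaded-serial idea is below. It uses pyserial; the port name, baud rate, and line format are assumptions for illustration, not the exact protocol running on the robot.

```python
# Sketch of reading the serial port on a background thread so the GUI never
# blocks. Uses pyserial; the port name, baud rate, and line format are
# assumptions for illustration.
import queue
import threading

import serial  # pyserial

def serial_reader(port_name, baud, out_queue, stop_event):
    """Continuously read lines from the microcontroller and queue them for the GUI."""
    with serial.Serial(port_name, baud, timeout=0.1) as port:
        while not stop_event.is_set():
            line = port.readline().decode(errors="ignore").strip()
            if line:                 # a timeout just returns an empty string
                out_queue.put(line)  # the GUI thread polls this queue

rpm_updates = queue.Queue()
stop = threading.Event()
reader = threading.Thread(
    target=serial_reader,
    args=("/dev/ttyUSB0", 9600, rpm_updates, stop),
    daemon=True,
)
reader.start()
# A GUI timer (e.g. in Qt) would periodically drain rpm_updates to refresh the
# display, while goal values are written to the port from another worker.
```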

[Image: motor controller GUI]

(Note: ‘RPM Setting’ should be ‘Duty Cycle Setting’ above – it’s still quite rough!)

Now we have an interface that the computer science team can use to develop a computer-side application to control the motors. This lets us move on to the next step of the project: an application that uses visual feedback to adjust the shots the robot takes. It’s going to be exciting to see where this project goes in the next few weeks!


Update: Due to persistent errors with the ATmega chip causing shutdowns and periodic reboots, I decided to switch hardware over to the PIC32MX250F128B. The library we are using is much more rigorously tested on this chip, and we have more debugging tools available for correcting code errors. Below is a 3D model of the new board that has been ordered for interfacing with the motor controller!


1st Hardware Hackathon was a success!

by Gabriel de la Cruz

75 students from different majors, including Computer Science, Computer Engineering, Electrical Engineering, Mechanical Engineering, and Materials Science and Engineering, participated in the 1st Hardware Hackathon, held November 14-15, 2015 in the Intelligent Robot Learning Laboratory and the Frank Innovation Zone.

The Robotics Club was one of the co-organizers of this event. To view this year’s winners, please visit the Hardware Hackathon’s website.

Below are a few of the pictures taken during the event. For more pictures, visit the event’s Facebook page.

[Event photos]


Project Updates: Battlebot

by Brandon Townsend

The Combat Robot Project is currently in the final prototyping stage of the wedge-bot. Right now, we are designing the belt-drive system for the wheels and waiting for parts to come in. We have decided to use existing pieces of scrap extruded aluminum and steel plating to fabricate the beta frame. However, the final frame will use ⅛-in aluminum plates for armor in order to keep the weight within the required limits. The final body of the wedge-bot will most likely be fabricated by a newly found resource. For the sake of efficiency, we are planning to have only part of the team carry out the above plan while the other members begin production on the saw-bot.


Project Updates: Prosthetic Hand

by Bryce Johnson

The second half of the semester was spent completing construction of the hand and working on its control. The hand is now completely built and has basic movement capabilities, such as opening and closing and basic thumb movement. We have started modeling the hand in CAD software so that we can later build a separate hand with the improvements we feel the existing design needs. We have also researched possible EEG sensor configurations, as well as funding opportunities for these sensors.

Next semester we plan to continue working on the control system for the hand. We would like the hand to have independent articulation with each finger. Over the course of the spring semester we propose to look at two separate user inputs for hand movement: a mimicking glove design and EEG sensors.


Project Updates: BEER PONG

by Gabriel de la Cruz

The two teams started planning and designing early in the semester, deciding what type of pong launcher to build. Using SolidWorks, ME students Christian Ziruk (Team 1) and Jessie Bryant (Team B) built CAD designs for the pong launchers. Both teams came up with fairly similar approaches using spinning motors, much like a tennis-ball launcher. The difference is that Team 1 will use only one motor, while Team B will use two. The videos below show a prototype of Team B’s pong launcher.

While the MEs were busy working on the design, the CS members worked on interfacing with an Xbox controller, which will be used to control our robot in the first phase of this project. They then moved on to learning how to communicate serially with a microcontroller using Python. Additionally, the CS team had some discussions on how to frame the vision processing for the robot’s autonomy once everything is working, including how to detect the ping-pong ball, especially when it bounces.
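
As a rough sketch of that kind of Python glue, the snippet below polls an Xbox controller with pygame and forwards a drive command over a serial link with pyserial. The port, baud rate, and the command format are hypothetical placeholders, not the team’s actual protocol.

```python
# Poll an Xbox controller with pygame and forward a simple drive command over
# serial with pyserial. The port, baud rate, and "L<speed> R<speed>" command
# format are hypothetical placeholders.
import time

import pygame
import serial  # pyserial

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)   # first connected controller
stick.init()

link = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)

while True:
    pygame.event.pump()               # refresh joystick state
    throttle = -stick.get_axis(1)     # left stick vertical, roughly -1.0 to 1.0
    speed = int(throttle * 100)       # map to a percentage for the firmware
    link.write("L{0} R{0}\n".format(speed).encode())
    time.sleep(0.05)                  # ~20 updates per second
```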

The EEs and CpEs, for their part, worked closely with the MEs to come up with a list of the electrical and mechanical parts needed to build the robot. However, in order to minimize shipping costs, we had to ensure both teams had a complete parts list, which cost us some valuable work time.

We also wrote a proposal for this project to secure financial sponsorship from the WSU IEEE Student Chapter, which we were granted, and we thank the organization for its generosity.

Time wasn’t really on our side this semester, as most of us were also co-organizers and volunteers for the 1st Hardware Hackathon, and many of our members also belong to the Palouse RoboSub Club.

On a positive note, however, we have ordered and received all the parts we need to build the pong launcher. This will allow our EEs to start building the circuits and programming the microcontrollers at our first Robotics Club meeting next semester. While the CS members wait for the MEs and EEs to get the structure built, they will continue working early next semester on the vision problems that must be solved for our robot to be autonomous. They will also help the EEs code the microcontrollers.

Below are videos of Team B’s pong launcher prototype. A shout-out to our mechanical engineering members, Jessie Bryant and Vitaliy Kubay, for working in their spare time to build a physical prototype before our last Robotics Club meeting. We would also like to thank the Frank Innovation Zone for allowing us to use their equipment to build the frames for the launcher.


Project Updates: Rover

by Tucker Stone

The Mars Rover team spent the majority of the Fall 2015 semester working on our proposal to gain admission into the NASA Robo-Ops competition. The combined effort of our team, although admirable, ultimately was not enough for us to be one of the teams selected for the 2016 competition.

Although our ultimate goal of entering next year’s competition was not achieved, the team continued to work hard on advancing the rover’s technical capabilities. The programming team wrote new code to support the robotic arm, as well as to integrate the independent six-wheel-drive motors.

While the programming team continued to advance the rover’s capability on the software side, the mechanical team continues to struggle to turn the rover’s current design into a working system. Largely as a result of design work completed in previous semesters, the team has had to spend considerable time modifying and redesigning systems that seem feasible on paper but do not operate as intended in reality.


[Featured image]

Project Updates: Robotic Arm

By: Marcus Blaisdell

This semester, our objective was to port the existing vision system from Python to C++. Vitaly has ported the majority of the OpenCV code to C++, and we can use it to detect different contrasts. We have purchased a Kinect camera, modified it to use standard USB, and have it running on a PC. Marcus has learned how to write C++ code that writes to the serial port, so the C++ programs can control the Arduino microcontroller that drives the arm. We are now going to focus on getting C++ OpenCV to recognize shapes, and then improve that to recognize chess pieces.

The featured image is a selfie of Marcus, Conner, and Kily (from left to right), taken with the functioning Kinect.


Marcus soldering Kinect wires.


Kily cutting and connecting Kinect wires as we modify the Kinect to use standard USB to connect to a PC/Mac.


Robotics Club is co-organizing the 1st Annual Hardware Hackathon

by Gabriel de la Cruz

The WSU IEEE Student Chapter spearheaded the 1st Annual Hardware Hackathon, which is happening this Saturday, November 14, 2015. The event is co-organized with the Robotics Club and the Palouse RoboSub Club.

We want to thank the following sponsors for their generosity: Digilent Inc., the Voiland College of Engineering and Architecture, and the School of Electrical Engineering and Computer Science.

Below are links to media articles:
WSU News
Daily Evergreen


Robotics + Arts = Robobble [UPDATED]

By: Marcus Blaisdell

ROBOBBLE: An Interactive Form-Making Installation from Saleh Kalantari on Vimeo.

During the spring semester of 2015, two architecture professors, Saleh Kalantari and Ebrahim Poustanchi, came to the Robotics Club to ask for help in creating an interactive art project they called “Robobble”.

This project was described as a changeable, amorphous blob that could be controlled by a cell phone app.

I, Marcus Blaisdell, together with Austin Bonnes and Tim Pizzino, volunteered to help.

The team started meeting and discussing what was required to create this project. The architects first envisioned one hundred telescoping rods, each a meter long, arranged in a sphere to control the shape of the piece. We investigated what could actually extend and retract; some of the ideas considered were hydraulic extenders and linear actuators.

The team decided on linear actuators. Austin located several options online, and after considering them, the team decided to order two different versions for evaluation. The first actuator to arrive was notably heavy but performed well. However, its speed was slower than the architects wanted; they asked for other ideas, but no one had any.

For control, the Arduino Mega seemed the obvious choice, since it has 54 digital I/O pins. When the size of the linear actuators was taken into consideration, the overall piece was reduced to only twenty actuators arranged in a duodecagon. Each actuator would require two I/O pins to control its bidirectional movement, which the Mega can handle.

In terms of wireless communication and control, Bluetooth seemed the best option, since all modern smartphones support it natively.

To power the actuators with the Arduino, we would need some type of motor controller. The architects wanted to keep the entire project under $3,000, and the linear actuators were priced at $140 each, so most of our budget was taken right there. I consulted with an electrical engineer at Schweitzer Engineering Laboratories, Andrew Gulbrandsen, who suggested we use transistors in an H-bridge configuration. This was investigated, and the cost seemed very reasonable at approximately $4 per motor. I proceeded to create a parts list, and the parts were then ordered.

When the new linear actuators arrived and were tested, they were all found to be defective: they wobbled terribly. It appeared that they were not perpendicular to their bases and would rotate in a circle that was quite pronounced at the end of the arm. The pieces were returned, and since we were at the end of the semester, the project was put on hold.

As the Fall 2015 semester began, the project was restarted and new actuators were ordered. Once they arrived, I attempted to construct the H-bridge controllers but found that the incorrect transistors had been ordered; they were underpowered for this application. We needed them to turn fully on with the 5V output from the Arduino, but they required 12V to do so. Marcus then consulted with two electrical engineers from SEL, Andrew Gulbrandsen and Doug Bruns, as well as EE students Matt Foreman of the Robotics Club and Ryan Summers of the RoboSub Club. Everyone agreed that the transistors would not be able to handle this task. We were left with the option of either ordering more transistors and continuing with the H-bridge, or ordering regular motor controllers. I found some motor controllers, DROK L298N, that could handle two 12V motors each and were only $8.20 apiece. Ten of them were ordered and tested, and they function perfectly.

The original power supply for the project was a laptop power supply that worked very well for testing a single linear actuator. Due to a communication error, seven additional laptop power supplies were ordered with the expectation that they would be used in combination to power twenty motors. This proved unreliable; the power supplies could not provide enough current, so two Robotics Club members, Connor Cole and Matt Foreman, suggested a PC power supply. I contacted VGH and found that they had a 900W unit in stock capable of 40A at 12V. This was purchased and hooked up, and it had more than enough power for all twenty motors running simultaneously.


For the cell phone app, nobody had any experience or knowledge of how to build one. However, Matt Foreman suggested that we should try using the MIT App Inventor. This was found to be very easy to use and was utilized to create the app.

The architects handled the construction of the physical piece, and I did the wiring and programming. The first version of the firmware and phone app used a slider bar to set the position of each actuator; since the actuators have no position sensing (or any sensors at all), position was approximated with timing. In the end, this proved far too difficult, so it was modified to simply command out, stop, or in, with the operator deciding when to stop each motor at the desired position. This worked much better. Gabriel de la Cruz provided feedback on the Arduino code, suggesting arrays to handle all of the variables instead of assigning each individually. His suggestion reduced the overall size of the Arduino file by almost 30%!

This project required a lot of help from a lot of people and was truly a group effort with contributions from several members of the Robotics Club, RoboSub Club and Schweitzer Engineering Laboratories. Special thanks go to Ace Hardware for their help with advice and equipment to wire the project together.

The Robotics Club was also mentioned in the WSU News Article.


This is me working on the electrical wiring.


Introduction to Object Detection using Python+OpenCV

by Gabriel de la Cruz

I agreed to teach a tutorial on vision and image processing because I see it as a useful skill to have these days, and not one limited to robotics. With the increasing sales of smartphones and other mobile devices, we are generating enormous amounts of data, including pictures and videos, which means we will need more skilled programmers to process that kind of data. Although that is not the goal of this tutorial, it is one of the many ways having this skill in a student’s arsenal can be beneficial.

Real-time vision processing is a huge part of most robotics systems that aim for full or semi-autonomy, so the club has seen the need to introduce its members to the basic concepts behind real-time video processing.

An autonomous robot needs to perceive its environment through sensors in order to make logical decisions about how to act in the world. One important sensor is the camera. There are different types of high-end cameras that would be great for robots, such as stereo cameras, but for the purpose of introducing the basics we used a simple, cheap webcam or the built-in cameras on our laptops.

The tutorial was scheduled over three consecutive Robotics Club meetings. The first tutorial covered the basics of image processing: how a video can be broken down into a sequence of frames (images), how an image can be broken down into pixels, and how each pixel is either a single scalar value or a tuple of three scalar values, depending on the image’s color space. Members learned how to load an image, change the color of a region of pixels, crop, display an image in a window, and save the image back to a file.
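
As a quick illustration, the operations from that first session look roughly like the following in Python+OpenCV (the file names and region coordinates are placeholders):

```python
# Basic image operations from the first tutorial, using OpenCV's Python
# bindings. File names and coordinates are placeholders.
import cv2

img = cv2.imread("input.jpg")         # load an image (OpenCV uses BGR order)
print(img.shape)                      # (rows, cols, 3): pixels and channels

img[50:100, 200:300] = (0, 0, 255)    # paint a rectangular region of pixels red
cropped = img[0:240, 0:320]           # crop with a NumPy slice

cv2.imshow("cropped", cropped)        # display the image in a window
cv2.waitKey(0)                        # wait for a key press
cv2.imwrite("output.jpg", cropped)    # save the image back to a file
cv2.destroyAllWindows()
```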

In the second tutorial, members learned how to stream images from the webcam into the program. The objective was to identify an object and track it. Members learned the basic steps of detecting an object by simplifying the task to an object of a single color. The process starts by converting the color space from RGB to HSV. The image is then thresholded with a lower and an upper bound to produce a binary image: all pixels within the bounds get a value of 255 and the rest are zero. From the binary image, the program identifies the biggest contour and extracts its outer rectangular bounds. Finally, we can draw the box on the original image. The process is a continuous cycle of retrieving the next image from the camera stream and applying the same image processing steps.
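
Sketched in Python+OpenCV, that tracking loop looks something like this. The HSV bounds are placeholder values and would have to be tuned to the object being tracked:

```python
# Single-color object detection, following the steps described above.
# The HSV bounds are placeholders that must be tuned to the tracked object.
import cv2
import numpy as np

lower = np.array([100, 120, 70])      # hypothetical lower HSV bound
upper = np.array([130, 255, 255])     # hypothetical upper HSV bound

cap = cv2.VideoCapture(0)             # stream frames from the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)     # convert to HSV
    mask = cv2.inRange(hsv, lower, upper)            # binary image: 255 inside bounds
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]  # OpenCV 2/4 vs 3 return order
    if contours:
        biggest = max(contours, key=cv2.contourArea) # largest contour
        x, y, w, h = cv2.boundingRect(biggest)       # its outer rectangular bounds
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):            # quit on 'q'
        break
cap.release()
cv2.destroyAllWindows()
```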

The last tutorial focused on refining the output of the second one. In the previous tutorial’s steps, noise can show up because pixels around the object may also fall within the lower and upper bounds during thresholding. This can be reduced by applying a Gaussian blur to the image and by using erosion and dilation. This tutorial also covered how to determine the position of the tracked object relative to the image and how to draw text on the image.
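
Those refinements might be sketched like this; the kernel size, iteration counts, and label text are illustrative rather than the exact values used in the tutorial:

```python
# Clean-up and annotation helpers for the tracking loop above. Kernel size,
# iteration counts, and the label text are illustrative values.
import cv2

def clean_mask(hsv, lower, upper):
    """Threshold the HSV image, then suppress speckle noise with erosion and dilation."""
    blurred = cv2.GaussianBlur(hsv, (11, 11), 0)   # smooth out pixel-level noise
    mask = cv2.inRange(blurred, lower, upper)
    mask = cv2.erode(mask, None, iterations=2)     # strip small false positives
    mask = cv2.dilate(mask, None, iterations=2)    # restore the object's size
    return mask

def annotate(frame, x, y, w, h):
    """Report the tracked box's position relative to the image and label the frame."""
    cx, cy = x + w // 2, y + h // 2                # center of the bounding box
    frame_h, frame_w = frame.shape[:2]
    side = "left" if cx < frame_w // 2 else "right"
    label = "({0}, {1}) {2}".format(cx, cy, side)
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
```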

All files and slides used during the tutorial are available here. The images used during the tutorial are not owned by the club, so we highly recommend that you use your own images, or use ours only for practice.

I learned these steps from various articles and code samples on the web. If you want to learn what else you can do with OpenCV, check out these websites: