Apple Support is now just a bad joke

Anyone who reads this blog knows I am an Apple fanboy.  And I still am in most respects, but there is one area where Apple has lost its shine:  tech support.

A few years ago, Apple support was one of the best around.  If you had an issue, you could be relatively certain that before all was said and done, the issue would be fixed.  And for good reason: you paid for the support.  Nothing was free, but what you paid for, you usually got.

It seems, however, that with the passing of Steve Jobs, that level of support has also passed.  Now when I call, all I get are poorly informed “techs” who know next to nothing about the products their own company makes.

My most recent run-in with the Apple folks was regarding a headless (AKA remote) install of OS X Server.

The first woman on the phone with me insisted that I had to set up the server initially before I could remotely set it up.  I’m going to say that again to let it sink in:  in order to do a remote setup of an OS X server, I would first need to set it up locally, and then I could set it up remotely.

I told her she was wrong and that this was not the intended function of the remote setup.  She said yes, that is how it worked.  So I quoted to her from their own web site.

I asked her to read along with me:  “You can set up a new Mac mini Server or Mac Pro with OS X Server by connecting to it via Screen Sharing or Apple Remote Desktop.”

She said, yes, that was true, but you needed to set it up with a monitor and keyboard first.

“A-ha!” I said.  “Please continue reading with me…stay with me on this.”


“See where it says you don’t need a screen or keyboard?”

“Well, yes,” she says, “but that’s after you set it up initially with screen sharing.  That you have to do locally.”

“Really?” I question.  “Because that’s not how it worked in the class I took.  And reading a little further on the page, it says…”

“Please hold”

After a while she came back and told me that this feature was no longer supported.  I questioned that and asked her to show me where it stated that this was no longer supported.  She put me on hold for about 20 minutes, then came back and said, “I just confirmed that this is no longer supported.”

“Great.  Where is the documentation that states this wildly beneficial feature is no longer available on Mac server?”

“Ummmm….please hold.”

Ten minutes later she returned with a “senior tech” on the line (this was already after being transferred to the “Enterprise support” department) who stated that this “headless install is not supported after 10.7.”

“Really?  Because the date on the web page is December 16, 2013…long after 10.7 was gone.”

“So…” I continue, “either the web page, the class, and the test I just took are wrong, or you are.  Which is it?”

“Please hold.”

About 10 minutes later he returned.  “Hey Ed, yeah, sorry about that.  I forgot that was a feature in this version.”

So finally we began troubleshooting the issue (50+ minutes after I initiated the call).  Please remember that to this point, all I had been doing was convincing the Enterprise support group that the server OS has a feature they don’t even know about.

So now we were rolling on getting a solution to this issue.  Suffice it to say, he checked pretty much everything I had already checked.  A few more holds and he comes back and asks, “Is this server local?”  I tell him it is on the local network.

“No.  I mean is the server on site there?”

“No.  It is remote.  That’s why I am trying to do a remote install.”

“Oh, well we only support devices that are on the local site.”

I say “So what you are telling me is that you cannot support me doing a remote install because the remote computer isn’t local?”

“Yes.”

“Seriously?  That’s what you are telling me?  You can’t help someone remotely install a server, which is a feature of the operating system, because the remote system isn’t local?”

“Correct.”

“And you are comfortable telling this to a customer?”

“I’m sorry, but that’s our policy.”

So, kids, remember…when trying to set up that remote Mac server, make sure the remote location is local….

Sheesh!  Apple is really sliding downhill…

So glad I wasted the $3000 on the Apple Helpdesk support.  I’ll be sure not to make that mistake again.

Awesome Bar issues in Firefox on a Mac

I used to do the IT support for a company in the same building as us.  On occasion I would have to get into their spam filter and change some settings.  The URL was spam.downstairscompany.com.  It was nice and easy to get to and manage.

Once I was relieved of the “joy” of supporting those *ahem* people (they were a pain in the ass), I no longer needed to go to that spam filter.  I did, however, need to go to my own spam filter, which was named spam.mycompany.com.

Every time I started typing the URL in the Firefox address bar, the one for the company I no longer supported would come up first.  This seemed like a minor annoyance at first, but a year down the road it was really beginning to bother me.  The auto-complete (or autofill) feature in the address bar just would not forget that I once supported them, no matter how hard I tried.

I started trying to fix this by deleting my history.  That did nothing.  So I headed over to Mozilla support and came across a link that told me to start typing the address and then press Shift+Delete to clear it from the autocomplete history.

That did nothing.  No matter how many times I tried it, it still did nothing.  I tried highlighting the address and doing it.  Nothing.  Hover over the URL and do it?  Nothing.  No matter what I did, the Shift+Delete did nothing.

So I started to play with my settings and found that if I went into Preferences, then Privacy, and set the location bar option to suggest nothing, that actually worked in eliminating the offending URL.  But it wasn’t a pretty solution, and honestly, I want it to suggest from history and bookmarks.

So I re-enabled it and the offending URL popped right back up again.  Ugh!!!

I figured there had to be something about this in the configuration, so I typed about:config in the address bar and did a search on “autofill.”  That turned up a preference called browser.urlbar.autofill.

I toggled browser.urlbar.autofill to false and closed and reopened the browser.

Darn…so close.  I was hoping the cache or database would be deleted but no such luck.
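(Side note: if you ever want that toggle to stick, the same preference can go in a user.js file in the profile folder.  This is just the standard Firefox preference syntax, nothing specific to this fix:

user_pref("browser.urlbar.autofill", false);

But that only controls the autofill behavior; it doesn’t touch the database the suggestions come from, which is the real problem here.)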

So next I decided it was time to perhaps start over.  I made note of the plug-ins I had installed and made sure everything was up to date with Xmarks.  Then, once that was done, I opened Firefox and went to Help >> Troubleshooting Information.  There, under the profile section, I chose “Show in Finder.”

The folder opened.  I closed the browser and then, in the profile folder, located places.sqlite and renamed (not deleted) the file to places.old.
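If you prefer Terminal to the Finder, the rename amounts to this (the profile folder name is a random string unique to each installation, so the one below is a stand-in for whatever yours is called):

cd ~/Library/Application\ Support/Firefox/Profiles/xxxxxxxx.default
mv places.sqlite places.old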

Once that was done, I reopened Firefox, re-installed my plug-ins, and all was right with the world again.  No more annoying autofill URLs.

(As an FYI, I opened the places.old file and, lo and behold, the offending URLs were in there.  So that is where they were coming from.  Attempts to edit the file directly did not achieve the desired results.)

Hope this helps.

A New Blogging Program

One reason I don’t blog as often as I would like is because of the inconvenience of logging into the interface, typing it out, formatting it, checking it, uploading it, blah, blah, blah.  In short, it is easy to avoid doing it.

But perhaps those days are past.  I’m trying out this new software called Blogo (http://www.getblogo.com).  It works (right now) with WordPress blogs, and it simplifies the blogging process to its minimal (yet still functional) components.  Thus far it seems to be working very well.  The setup took all of about three minutes, and walking through the tutorial about double that.

I am hosted by GoDaddy.com, so I can vouch for the program working with that host.  I can’t see any reason that it wouldn’t work with any blog host on the WordPress format.  They also mention on the site that they intend future releases to work with other services (Tumblr and Blogger), “coming soon.”

While the interface is minimal, the features aren’t.  Within the app you can add tags and categories, insert media, change fonts, preview the post, schedule a post to publish at a specific date, and respond to comments.  Honestly, for day-to-day posting that is really all I personally need.  I’ve found that what usually happens is that when I get into my blog, I start messing with settings and spend more time doing that than actually writing the blog post.

So, yeah, this might be a good fit for a techie A.D.D. guy like me…

One thing I do not like: the formatting (namely paragraph breaks) is uploaded correctly, but once the post is published, the breaks go away on the post in Blogo.  They are still there in the actual blog post, however.  It makes going back and editing kind of a nightmare.

It is currently a “Mac Only” application, available in the App Store, and I am unaware of any plans to expand to the PC or Linux market.  It is still essentially in beta, and as such is 50% off in the App Store as of this writing ($14.99 US).


Applying Folder Permissions for nested folders

My company uses a network share to keep all the customers’ information regarding work we are doing for them in a centralized location.  Each job we do has its own folder, and each job folder has a standardized set of sub-folders.  Some of the information in these folders is available to all users, while some (like financial data) we keep secured.

The structure of the folders looks something like this: a root share containing one folder per job number, with each job folder holding the same standard set of sub-folders (Drawings, Financials, and so on).

Note the job numbers are different, but the sub-folders under them are all the same.  The issue at hand, however, is that the folders need separate security (or at least different security on each, depending on the content of the folders and which groups management has asked to have access).  So for my example here, let’s say we have two groups, Group A and Group B.

Group A has full control over all the folders, and Group B has access to all the folders except Drawings and Financials.  The challenge, then, is to apply the correct security permissions.  And while my example here shows just two job folders, my real-life challenge consisted of hundreds of folders like this.

My original attempt was to create a template folder that had all of the correct permissions assigned to it.  I then created a batch file for the end users to run to create a new folder with the correct security permissions.  The batch file was just a simple run of Robocopy with the /SEC switch:


REM Copy the template folder tree (/E includes empty sub-folders) along with its NTFS permissions (/SEC)
ROBOCOPY \\Sourceserver\Share\Template \\Sourceserver\Share\!New /E /SEC

That’s put in a batch file, and the end users, when they start a new job, are supposed to run the batch file and then rename the “!New” folder it creates to the new job number.

The problem is that relying on the end users to actually follow procedure is kind of pointless.  They never do it, as it is much easier to copy and paste an existing folder than to double-click a batch file.

So the issue presented to me was to apply the correct security consistently across hundreds of folders, multiple companies, and various geographical sites and networks.

Ugh!

So my first thought was some sort of batch script, but that quickly became a nightmare due to the differences in file structure on the servers.

I then turned to PowerShell.  While I had minimal experience with PowerShell, most of it being in the Exchange version, I knew of its awesome power and figured that if anything could apply these permissions correctly, it would be PowerShell.

Off to Google I went.  I found many different solutions on how to assign permissions to a folder, but most of them were in reference to creating a folder and assigning permissions to the newly created folder.  More specifically, they were usually talking in reference to user folders in the home directory.  I wanted to change permissions on folders that already existed.

So I broke the problem down into two parts.  The first part was to identify the folders whose permissions needed changing, and the second part was to actually change the permissions.

I decided to create two variables.  The first was the list of sub-folders I wanted to find, and the second was the variable containing the full paths of the folders, including the sub-folders.  So in my example, the first variable would be a static list of the sub-folders I want to change the permissions on (Drawings and Financials).  The second variable would be the entire path of each folder, so our paths would end up being \\Root\Job1\Financials, \\Root\Job1\Drawings, \\Root\Job2\Financials, and \\Root\Job2\Drawings.  I wanted to do it with two variables in case I needed to change the names of the sub-folders later on.  And by piping the first variable into the second, it made for a nice clean method of making changes relatively easily.

But I am getting a little ahead of myself.

I started by trying to find all the folders in the location that had the name “Financials” in them.

$Folders=Get-ChildItem -Filter "Financials" -Recurse -Path "\\Root\"

This would return hundreds of the following:

Directory: \\root\Job96868\Financials

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----        10/18/2013   8:59 AM            Contract_PO

Not particularly useful, but I was on the right path in isolating the desired directories.

Looking into Get-ChildItem a bit more, I came up with changing to the directory first and running the following.

$Folders=Get-ChildItem -Recurse -Filter "Financials" | ?{ $_.PSIsContainer } | Select-Object FullName

Using the ?{ $_.PSIsContainer } filter allowed me to isolate just the folders with the name “Financials.”  At first pass, I thought I had it, but unfortunately “Select-Object FullName” returned wrapped objects like @{FullName=\\Root\Job1\Financials} rather than plain paths, which I could not use to apply ACLs.  Somehow I needed to return only the path name.

Finally I  came up with the following:

$Folders=Get-ChildItem -Recurse -Filter "Financials" | ?{ $_.PSIsContainer } | Select -Expand FullName

This finally filled the $Folders variable with just the full paths of the Financials folders.  Now on to applying the permissions.  This was a matter of creating a loop over each item in the $Folders variable and applying the changes.  In the end, here is what the PowerShell script looked like:

cd \\Root
# The sub-folders whose permissions need to be locked down
$Subfolders=@("Financials","Drawings")
# Full path of every matching sub-folder anywhere under the share
$Folders=Get-ChildItem -Recurse -Include $Subfolders | ?{ $_.PSIsContainer } | Select -Expand FullName
# The group being denied access (a resolvable account name, e.g. DOMAIN\Group B)
$sid = "Group B"
foreach ($i in $Folders) {
    $Acl = Get-Acl $i
    # $False as the first argument leaves inheritance from the parent enabled
    $Acl.SetAccessRuleProtection($False, $True)
    # Deny rule that flows down to child folders and files
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList @($sid,"FullControl","ObjectInherit, ContainerInherit","None","Deny")
    $Acl.AddAccessRule($rule)
    Set-Acl $i $Acl
}

So to explain what is happening here:

  1. Change the directory to the correct share.
  2. Create a variable listing the sub-folders you want to apply the ACL to.
  3. Create the $Folders variable and populate it with the correct file paths.  Note that it includes only the folders defined by the $Subfolders variable.
  4. Create a variable for the group you want to apply the permissions for.
  5. Create a loop that applies the ACLs to each “line” in the $Folders variable.
  6. Read the current ACL values into the $Acl variable.
  7. Call SetAccessRuleProtection; with $False as the first argument, the folder keeps inheriting the existing permissions from its parent.
  8. Create a $rule variable containing the permission you want to apply to the folder in question.
  9. Apply the permissions/ACL with Set-Acl.

I then created a .ps1 file for this and used Task Scheduler to run the PowerShell script every hour, so any new folders created will have the correct permissions applied.
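For reference, a task like that can be created with a single schtasks command.  The task name and script path below are placeholders, so substitute your own:

schtasks /Create /TN "Apply Job Folder ACLs" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Set-JobFolderAcls.ps1" /SC HOURLY /RU SYSTEM

Running it as SYSTEM keeps it from depending on any one user being logged in.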

Hope this helps!

Ed

Android vs. Apple…score one for Android.

I’ve been a fan of Apple since, well, about 2006, when I got my first MacBook Pro.  I know in the “real world” that might not be all that long, but in the IT world that’s a long time.  To put it in perspective, Twitter was still a startup and had yet to exceed 20,000 messages a day on the whole service.  All of my computers, both at work and at home, are Apple.  My phones are all Apple.  All my tablets are Apple.  Except two.  I have an HP Touchpad that is mostly retired now and a museum piece, and a 1st gen Samsung Galaxy Tab.  I recently fired up the Galaxy Tab for the first time since July 2013 and found one thing that Android is doing much better than Apple: supporting older devices.

My first iPad, a first gen 32 GB iPad, was bought about a week after the initial release.  But as time has marched on, fewer and fewer apps are available for it.  The OS can no longer be updated, and each day the apps slowly go bye-bye.  Eventually it will only be good for surfing the internet and maybe checking mail.

But imagine my surprise when I fired up my Android tablet to find that not only did the older apps work, but apps that I couldn’t get on the Android tablet before were now available.  More to the point, apps that would no longer run on my first gen iPad were now available and running on my first gen Galaxy Tab!  Yikes!

So while Apple has, for the most part, abandoned early devices, the Android environment continues to embrace and develop for these golden oldies.

So if longevity is a metric you use to base your decision on which tablet to buy, it looks like Android is the platform for you.


A-ha! A new host!!!

Well, after a long time of really being frustrated by Squarespace and their format and such, I made the leap to the WordPress hosted service by GoDaddy.  Thus far, I am really liking the features, and I really like the price (much, much cheaper than Squarespace).  I guess the final straw was when Squarespace decided to stop supporting the Squarespace 5 mobile app to force all the end users to version 6.  That part I don’t mind so much; it was the fact that, in order to get the same features and benefits that I had with Squarespace 5, I would have to pay about 20% more.  I understand and can appreciate the need for change, but if you are going to force a long-time customer to change, you should offer the same benefits at the same price.

And honestly, I know prices go up as a standard practice, but this is one area where that does not necessarily hold true.  In IT, the price of storage and bandwidth goes down over time.

And the support has really gone downhill recently, so it was time to say goodbye to Squarespace and hello to WordPress.  At least now with WordPress I can take my blog almost wherever I want.  🙂

Apple maps, you suck!

All I wanted to do was go get me some Mongolian Beef, extra crispy, at PF Chang’s. Was that asking so much? Instead, you put me in the middle of nowhere between a parking garage and a construction site. Then you had the audacity to mock me with “Arrived at PF Chang’s!!!”

I hate you, Apple Maps!

ActivEcho Review

One of the challenges that network admins face these days is the constant fight against cloud storage systems like Dropbox, OneDrive, Google Drive, Box, etc. These systems are very appealing to end users but are a never-ending source of frustration and a huge security issue for the organization. In addition to being a huge open hole for data to leave the friendly confines of the organization, and in addition to not having adequate password and/or encryption security on the data being removed, the terms of service of these services normally do not align with the best interests of the company.

With that in mind I set off to find a solution that would be a good substitute for Dropbox.

It was almost by mistake that I happened upon ActivEcho. I was actually looking for a solution to allow a MobilEcho-type interface on Macs and PCs that weren’t on the network. I was asking my sales rep at Acronis about this issue and he suggested ActivEcho.

While not what I was looking for originally, the uses and abilities of this software quickly piqued my interest. This piece of software was far and away better than the Dropbox that most of my users were using. Some of the features that make it superior:

1) Essentially unlimited storage. It is only limited by the size of the storage array I attach to the server.

2) Integrated with Active Directory (and thus enforcing my password security policies on it.)

3) Encrypted file storage

4) The ability to restore previous versions.

5) Logging to see what happened when and by whom.

6) No Terms of service that conflict with proprietary rights.

7) Price based on user base and not storage amount.

8) Data is hosted in house on our servers.

9) Integrates completely with MobilEcho on mobile devices.

ActivEcho has a web interface as well as a PC and Mac client.

Because my network is spread out and connected via expensive MPLS data lines, I prefer not to use them to transfer the data to the ActivEcho server. So I configured my internal DNS to hand out the external IP address of the server and then configured the local firewall to redirect that IP address to the correct internal IP address. Another way to do it would be to put the server in a DMZ behind the firewall. So the one downside to the whole solution is the additional internet bandwidth it requires in my configuration. But it is still better than using the expensive MPLS bandwidth.
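On a Windows DNS server, that internal record is a one-liner. This is just a sketch with a made-up zone name, host name, and address, and it assumes the DnsServer PowerShell module (Server 2012 and later):

Add-DnsServerResourceRecordA -ZoneName "mycompany.com" -Name "activecho" -IPv4Address "203.0.113.25"

The address given is the server's public IP; the firewall's hairpin/redirect rule is what actually lands internal clients on the right internal host.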

User adoption has been very high, and it is a great collaboration tool to use with clients and co-workers alike. I highly recommend this software for anyone looking to get out of the Dropbox trap.

http://www.grouplogic.com/enterprise-file-sharing/file-sync/

VMWare, I have bested you!!!

I recently got a new iMac at work to replace my desktop and clunky, heavy HP Elitebook laptop (which clocks in at a mighty 8.25 pounds).  The idea was to slim down to an iMac and a MacBook Pro and use VMware Fusion to run the Windows operating system for the things I could not do on the iMac.

Everything started off great.  I began by installing Microsoft Office for Mac 2011 (which is pretty great) and then a few other nifty little programs I cannot live without on my Mac (Dropbox, LastPass, XMarks, KeePassX, Jump Desktop, and Evernote) and then downloaded and installed VMWare Fusion 5.

Pretty soon I was on my way, doing a migration from my clunky 32-bit Windows 7 machine to the new Mac.  I kicked it off and went home.  The next morning, the migration was complete and I fired up the VM with no issues. Yes!  One down, two (or three) to go.

The next one was an already-existing VM for Windows XP, and that was simply moving the VM from an old machine to the iMac.  About an hour later that was finished (and that was moving from VMware Workstation 7 to VMware Fusion 5).

So then it came time to do my main PC: an HP Elitebook 8540w with 8 GB of RAM and 4 CPUs.  As I had great success with the Migration Assistant in Fusion 5, I figured I would give that a shot.  I kicked it off and let it run its nine (ugh) hours.  Sadly, nine hours passed and I got the “migration failed” message.

I tried again (thinking perhaps it was a transient error) and got the same result.

So it was time to try the Standalone Converter.  I uninstalled the PC Migration agent and installed the new version of the VMware Standalone Converter.  I had a pretty good feeling about this, as I had run this converter before and it worked very well.  So I attached an external hard drive to the machine and kicked off the conversion.  It got to 98% and failed.

So I uninstalled the Migration Agent, the Standalone converter, and my AV and firewall, then reinstalled the latest Standalone Converter agent and tried again.  It failed again at 98%.  It was time to call VMWare.

I called and spoke with a person who seemed to want to help at first, but wasn’t real committed to the process.  He remoted into the machine and kicked off another migration, this time setting the processors to 1 and the memory to 2 GB, disabling all NIC cards, and setting the hard drive to “preserve source.”

I told him it would take a few hours and I would email him when it was finished.  Naturally it failed again (at the now incredibly frustrating 98%), and I emailed him the results and asked him to call me.

The next day he called me (he went home sick, apparently) and remoted in to see the results screen.  He asked around and spoke to a few other techs there, and they were all in agreement that this issue was unresolvable and nothing further could be done with this PC or to assist me in getting it converted.

I didn’t like this answer.

I thanked him, hung up the phone, opened a web browser and started doing some investigating.  Certainly I couldn’t be the only person with this issue, right?  The frustrating part of this is that I had done a Standalone migration on this PC before and it worked fine.

I took another look at the error that it generated.  The warning before it failed said “Warning: Unable to update the BCD on the destination machine’s system volume,” followed by the error “An error occurred during reconfiguration.”  The final status of the job was “FAILED: Unable to find the system volume, reconfiguration is not possible.”

So I started my search on the VMware website and came up with a document that explained how to use BCDEdit to modify the boot configuration data to allow the Standalone Converter to proceed.  It seems there was a 100 MB “System Reserved” partition that was defining itself as C:, with the volume where my data lived as D:.  I booted up with the Windows 7 install disc and proceeded as instructed, and retried the migration.  Again, failure.  I went back using the command prompt in the recovery console, and BCDEdit showed that the settings had reverted to what they were originally.   I changed them once again and rebooted.  On a hunch I decided to go back in and, once again, the settings had reverted to their original values.  So regardless of how many times I tried the BCDEdit route, as long as that System Reserved partition existed, the issue would persist.
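I no longer have the exact VMware document handy, but the edits were along these lines, run from the install disc’s command prompt (reconstructed from memory; the store path and drive letters depend on how the recovery environment mounts the volumes, so treat this as a sketch rather than gospel):

bcdedit /store C:\Boot\BCD /set {bootmgr} device partition=C:
bcdedit /store C:\Boot\BCD /set {default} device partition=C:
bcdedit /store C:\Boot\BCD /set {default} osdevice partition=C:

The idea is to point both the boot manager and the default Windows entry at the volume that actually holds Windows, instead of the System Reserved partition.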

And this is where it gets a little scary.  I knew I needed to get rid of that system reserve partition, but I also need to make sure I could get back into it if the VM Migration didn’t work.

So I proceeded to make a clone of the original disk.  I used an Apricorn SATA data cable and EZ Gig data transfer software to make a duplicate of my original drive on a spare laptop HD that I had laying around.  The cloning took about two and a half hours, but once finished I had a duplicate I could safely play with and not jeopardize the original.

I found a good article on sevenforums.com on how to go about removing the System Reserved partition, and then another on running Startup Repair to restore the boot files and such to allow the computer to once again boot.  I ran through Startup Repair twice, and on the third boot attempt I saw the happy Windows screen jump to life, prompting me for a Ctrl+Alt+Del.
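For the curious, the partition removal boiled down to a short diskpart session from the install disc’s command prompt.  The disk and partition numbers below are from my drive and will differ on yours, which is exactly why I only did this on the clone:

diskpart
list disk
select disk 0
list partition
select partition 1
delete partition override
select partition 2
active
exit

Here partition 1 was the 100 MB System Reserved partition and partition 2 the Windows volume; marking the Windows volume active is what lets Startup Repair rebuild the boot files on it.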

Once in, I initiated the Standalone conversion.  This time, after two hours, I was met with a much-longed-for green “Success!” prompt.

I copied the VMDK over to my iMac, imported it into Fusion and, ta-da, it worked like a charm.

And the techs at VMware said it couldn’t be done…pshaw!

They just weren’t motivated enough.  🙂

Barracuda Spam Filter Out Queue Issue

I recently implemented a Barracuda Spam and Virus Firewall 300 at a business. The configuration was easy enough as they had an Exchange 2010 environment. Having a similar live system at my own company made it even easier as I could just walk through it tab by tab and essentially duplicate the settings I had on my existing system.


The implementation from out of the box to installed and filtering took about three hours. Everything looked great. Or so I thought…


As usual with these types of things, the unexpected always rears its ugly head. This time it was several days after the initial implementation. People were complaining about two things: 1) their quarantined mail didn’t have nearly as many items in it as their old system (Postini by Google) and 2) some emails were taking up to eight hours to be delivered.


The first issue they raised had a simple explanation: the Barracuda was simply blocking those messages outright rather than quarantining them. I asked them if they wanted me to dial back the filtering to have it put more in the quarantine rather than outright blocking it. “No, no…” they said, “we hated all those emails in there. We just didn’t know where they went.”


The second issue was a bit trickier. I started looking at the logs and the queues and the Advanced Queue management and sure enough some emails would be queued for delivery but would never make that last jump over to the Exchange server.


Now this Exchange server is nothing special, but it is probably overkill for this company’s purposes. It is an HP ProLiant DL380 Gen 5 with 16 GB of RAM and dual processors. Not a monster, but with only ten users, more than enough horsepower to suffice.


I contacted Barracuda support and they told me that the spam filter was having issues connecting to the Exchange server and at times it looked like it wasn’t even getting an SMTP connection.


I proceeded to apply all software updates and firmware patches, as well as the latest device drivers. No dice. Still had the issue.


Then I replaced the NIC cards, the switch, and the cables going from the server to the switch and from the Barracuda to the switch. Still nothing.


At this point, I talked to Barracuda again and told them that I had pretty much replaced everything I could from a network standpoint, but I was not convinced that it wasn’t an issue with the Barracuda unit. They shipped me another one and I had it the next day. I applied some firmware updates on the new unit, transferred the configuration and…nothing. Same problem.


So back to the drawing board I went. I started to research the SMTP receive connector on Exchange 2010. After a few hours of searching, I hit pay dirt. I opened the Exchange Management Shell (EMS) and typed the following:

Get-ReceiveConnector -Identity "Default <Servername>" | fl

It proceeded to list several key pieces of information. Then, based on the research I did, I changed the following settings:

Set-ReceiveConnector -Identity "Default <Servername>" -ConnectionTimeout 00:10:00

Set-ReceiveConnector -Identity "Default <Servername>" -ConnectionInactivityTimeout 00:10:00

Set-ReceiveConnector -Identity "Default <Servername>" -Banner "220 SMTP OK"

These could actually be combined into the following:

Set-ReceiveConnector -Identity "Default <Servername>" -ConnectionTimeout 00:10:00 -ConnectionInactivityTimeout 00:10:00 -Banner "220 SMTP OK"
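A quick way to confirm the changes took is to pull back just those three properties (same placeholder server name as above):

Get-ReceiveConnector -Identity "Default <Servername>" | fl ConnectionTimeout,ConnectionInactivityTimeout,Banner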

So I applied these settings, went back to my Barracuda, checked the queue…and I waited.


And waited.


And waited.


And nothing happened. The mail sailed right through just like it should. Problem solved.


Hopefully someone else will find this information useful and won’t have to jump through all the hoops I did.