My new Synology DS1618+

I’ve been using my new Synology DS1618+ for a month now and I am impressed with its performance. This machine is a lot more than just a file server. It started with an e-mail server: I didn’t want to use my provider’s Hosted Exchange service any more because it is simply too expensive. It took some effort and patience to get everything working again, but now it runs like a charm. Let me tell you what I’ve installed on the DS1618+ so far.

  • An E-mail server.
  • A Calendar which automatically syncs with all my devices using CalDAV.
  • A CardDAV server to sync my contacts to all my devices.
  • The Surveillance Station which constantly records all video feeds from 3 security cams.
  • A Git server. Source control for my development.
  • A Subversion Server. Also source control.
  • WordPress which hosts this simple website.
  • A VPN server. For secure connections to my network.
  • Apache Webserver.
  • Active Backup that automatically backs up the laptops in our home.
  • A WebDAV server and Synology Drive, which is a cloud solution.
  • Download Station for grabbing stuff from the internet.
  • VirtualHere. This tool gives me USB over IP which I need for the iPad mini in the attic for building iOS apps.
  • Antivirus Essential.
  • And last but not least, the Virtual Machine Manager. I use it to run several operating systems like Windows 10 and Windows Server 2012, but it can also host Linux-like operating systems (Ubuntu, etc.).

The system is incredibly flexible and runs pretty much 100% of the time without breaking a sweat. It has 6 bays for hard disks and I now have about 20 TB online. I expanded the memory from the standard 4 GB to 32 GB. I can reach the Synology from anywhere in the world where I have an internet connection using the Synology QuickConnect service. All documents, photos, videos, etc. I really want to keep are synced to OneDrive on a regular basis. So if the house burns down, which I hope never happens, I still have the most important stuff safe.

I’ve been a Synology fan for many years now and my old DS414j is still running beside its new big brother. I use it as a second backup for the things I really want to keep.

In short, this is an amazing machine. It’s not cheap, but it also saves me money (no more fees for Hosted Exchange, web hosting, etc.). And I have all my data at home, not in some cloud somewhere.

Web projects I worked on

Wageningen Universiteit & Research

100 year anniversary of the Wageningen University LEB department

A Google Maps site showing all the activities of the LEB department of Wageningen University

image

Pachtprijzen

A web page showing the ‘Pachtprijs gebieden’ (lease price areas) in the Netherlands using Leaflet

image

Presentations2Go Scheduler

Presentations2Go released version 5 and I’m responsible for migrating the Scheduler from version 4 to this version. Below you see the Encoder status screen.

image

Course Registration System

For this application I’ve fixed many bugs.

The start screen for the PET School

image

The start screen for the ESD School

image

My experiences developing a native App with C#, Xamarin, MvvmCross and Visual Studio 2017 for Windows.

Introduction

I have over 35 years of programming experience in environments ranging from mainframe, midrange and small systems to PCs, but never have I met so many problems in one development process.

For the past 6 months I’ve been developing just one app for Android and iOS, and it has been an experience that gives me mixed feelings about mobile development. Don’t get me wrong, I love programming in general, but this… this was different.

The app

This is what the app is supposed to do:

  • Register the device with UserId and Password via a webservice.
  • Enter a TAN once for the registration and verify it via a webservice. This authenticates the device with the server just once.
  • Add an Application (website) one wishes to verify. There are 5 websites notifications can be received for.
  • Accept push notifications from the server via Firebase when a user wishes to authenticate for a website.
  • When a notification is received:
    • When received from a web browser on a PC, start a QR code scanner and scan the code that is shown on the web page. When the information in the QR code equals the information in the notification, accept the verification.
    • When received from a mobile device show buttons ‘Accept’ and ‘Cancel’.
      • When ‘Accept’ is clicked tell the server the authentication is accepted via a webservice.
      • When ‘Cancel’ is clicked tell the server the authentication is not accepted via a webservice.
  • The user must be able to add more websites to the registration.
  • The user must be able to remove websites from the registration.

Development choices

It was decided that the app would be built with MvvmCross and native views rather than Xamarin.Forms. A consultant from an external software house told us that as long as you stick with the controls Forms provides you are OK, but as soon as you need something extra you have to write your own renderers, which is a pain. And of course he knew MvvmCross very well. For the record, I had more experience with XAML at that time and favoured Forms. Bad experiences with third-party UI controls also influenced the decision.

It was also decided we would use a PCL (Portable Class Library) for the Core functionality. I would have liked to go with .NET Standard, but that did not cover all the needed functionality at that time.

MvvmCross forces you into the MVVM pattern, which is a good thing. You have to think about reusability of code all the time and separate generic functionality from device-specific code as much as possible.
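To give an idea of that separation, here is a minimal sketch of a view model that would live in the shared (PCL) Core project. The class, property and command names are hypothetical, and the namespace is from the MvvmCross 5.x era, so adjust it for your version.

using MvvmCross.Core.ViewModels;

namespace MyApp.Core.ViewModels
{
    // Shared, platform-independent view model: no Android or iOS types in here.
    public class RegistrationViewModel : MvxViewModel
    {
        private string _userId;
        private IMvxCommand _registerCommand;

        public string UserId
        {
            get { return _userId; }
            set { _userId = value; RaisePropertyChanged(() => UserId); }
        }

        // Bound to a button on both platforms; the platform projects contain
        // only the views and the bindings, not this logic.
        public IMvxCommand RegisterCommand
        {
            get { return _registerCommand ?? (_registerCommand = new MvxCommand(Register)); }
        }

        private void Register()
        {
            // Call the registration web service here via an injected service.
        }
    }
}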

Learning curve

Learning MvvmCross was a first. It’s a good concept and overall it works pretty smoothly. The MVVM pattern was not new to me, which gave me a head start. The difficulty with a PCL in combination with native code is the communication between the libraries. You can access the PCL from native code but not (easily) the other way around. MvvmCross bridges that gap easily using Inversion of Control (as Forms would have done). With the help of the consultant MvvmCross was learned quickly. It does however have some quirks. Because of late binding the linker sometimes does not know that you’re using certain functionality and does not add it to the end result, which leads to confusing situations. But once you get the hang of it you’ll recognize these situations pretty fast. A pitfall is debug versus release mode. In debug mode the linker includes everything, but it does not do so in release mode, causing the app not to work correctly in its final release.
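A common workaround from the community (a sketch, not something from this particular project) is a ‘LinkerPleaseInclude’ class in each platform project: code that is never executed but that references the members you bind to, so the linker keeps them in a release build.

using Android.Widget;

namespace MyApp.Droid
{
    // Never instantiated; its only purpose is to reference members that are
    // otherwise only used through (late bound) bindings, so the linker
    // does not strip them from a release build.
    public class LinkerPleaseInclude
    {
        public void Include(TextView text)
        {
            text.Text = text.Text + "";
            text.Hint = text.Hint + "";
        }

        public void Include(Button button)
        {
            button.Click += (s, e) => button.Text = button.Text + "";
        }
    }
}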

Practice

Because Xamarin is under constant development you’re on the bleeding edge, and because you have to conform to two (or more) different platforms you had better be prepared to meet frequent crashes and hangs, wrong builds without any clue why, slow-performing builds, frustrating provisioning issues with Apple’s paranoid and absurdly complex protection system in combination with wrongly cached information, differences in layout, OS versions, screen sizes of the individual devices you’re building for, etc.

Documentation is sparse, incomplete or deprecated, and very often not applicable to your development choices. A lot of the examples come from websites like Stack Overflow but cover a plethora of development choices. You can find examples for Swift, Objective-C and C# for Apple, in combination with explanations for Xamarin Studio, Xcode, and Visual Studio on PC or on Mac. Very often you can deduce from the Swift and Objective-C examples what it should be in C#, because luckily the wrapped interfaces very often follow the same naming conventions, but sometimes it’s not clear or just plain difficult. Without the community, development would have been very, very difficult if not impossible. Solving everything on your own is very time consuming and sometimes very frustrating.

Using the Xamarin designers takes a lot of getting used to. I really had trouble understanding the constraint system used on iOS devices. I’ve been tearing my hair out wondering why a design was shown correctly in the designer but not on the simulator or device. I finally got the hang of it, but it’s been a struggle. The designers are not stable at all. For XAML I was able to grasp the syntax quite quickly, but the XML of iOS views (storyboards) is hugely complex and I still don’t have that in my skill set, so I rely on the designer heavily. The XML of Android views is quite concise and easy to understand, so I hardly ever use that designer. For data binding Android views with MvvmCross you have to edit the axml by hand (at least I’ve not found a way via the designer yet); an alternative is binding in code, as sketched below.
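Here is a minimal sketch of MvvmCross’s fluent binding syntax in the Android view, which avoids touching the axml. The view, layout and resource names are hypothetical, the view model is the sketch from earlier, and the namespaces are again from the MvvmCross 5.x era.

using Android.App;
using Android.OS;
using Android.Widget;
using MvvmCross.Binding.BindingContext;
using MvvmCross.Droid.Views;
using MyApp.Core.ViewModels;

namespace MyApp.Droid.Views
{
    [Activity]
    public class RegistrationView : MvxActivity<RegistrationViewModel>
    {
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);
            SetContentView(Resource.Layout.RegistrationView);

            var userIdEdit = FindViewById<EditText>(Resource.Id.userIdEdit);

            // Bind the EditText's Text property to the view model's UserId.
            var set = this.CreateBindingSet<RegistrationView, RegistrationViewModel>();
            set.Bind(userIdEdit).To(vm => vm.UserId);
            set.Apply();
        }
    }
}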

Bugs, bugs, bugs

Sometimes development progress comes to a grinding halt when yet another problem with the development environment crops up. I’ve had hangs during solution loads, during debugging and during deployment. Sometimes deployment becomes impossible and you’re stuck, unable to test your app on a device. One day, because I was using my own Samsung S6, I had to factory reset it to be able to deploy to it again and I was forced to completely set it up again (sigh…). Emulators (or simulators, as Apple calls them) are a good solution for straightforward development, and deployment to them is pretty fast. But not everything a real device can do can be done on an emulator. For instance, Firebase is not supported in emulators. It is therefore imperative that your final tests run on a device. This also causes difficulty. Which of us has all the variations of devices being used in the world? We use an iPad Pro for iOS and a Motorola G Play for Android testing. The iPad has iOS 10, but we do not have an iOS 9 device. Some code is specific to these devices (Firebase), so how do we test that? Xamarin offers a service with lots of devices, but that is pretty expensive to use.

You can be faced with the situation where the day before everything was working perfectly and the next day (after a fresh boot) everything seems broken. Builds just aren’t stable at the moment. Sometimes you end up having to clean and rebuild your solution. Sometimes even that does not help and you have to manually remove all ‘obj’ and ‘bin’ directories just to get going again. In my experience changing something in the PCL will cause wrong builds, so you had better clean and rebuild when changing something in the PCL. And that takes a lot of time.

Testing

Turnaround times are just plain bad. Every build and deployment, even on a fast computer with an SSD and lots of memory, takes up to three minutes for a complete build (depending on the size of your solution, of course), meaning that a small change can take a relatively long time before it can be tested. For iOS development you have to have a Mac. We use a refurbished 2012 MacBook and it is pretty fast. All communication goes through a network connection and the Mac only has a 100 Mbit port. Even designing a view in the Xamarin designer causes a generation process to start on the Mac. Although this is a bottleneck, builds on the Mac are faster than the local builds for Android. Deploying to a real device is slower than deploying to an emulator for both iOS and Android. Frustrating is the fact that when you lose the connection to the Mac everything just stops working. I very often lose the connection and reconnecting does not always work. You’ll have to restart Visual Studio to be able to reconnect.

Debugging

Debugging is pretty good. There are limitations, however. You cannot edit and continue and you cannot change the position of the current execution point. Because you’re running against native code you’ll very often see that you’re in external code rather than your own when an exception occurs, but that is not uncommon in other development either. The logs on the devices help a lot during debugging. You will have to filter out the logging for your app, because when you run on a device with a lot of installed apps the logs can become very bulky. Stack traces are very often native too (Java on Android and Objective-C on iOS). You’ll have to learn how to interpret these, but most of the time it is pretty clear, although I’ve seen some pretty confusing errors being thrown that put you off track. To quote a very famous British detective: “If you can’t find the solution in the explainable, investigate the unexplained.”

Conclusion

Mobile development with Xamarin is time consuming and thus expensive. The platform, although it has been around quite a long time, is not stable. Because of the many different development environments, platforms and programming possibilities, the amount of knowledge and skill needed is huge, making the learning curve pretty steep. Documentation is sparse, incomplete and often deprecated. The community is huge and very active. Before deciding on a development environment and architecture (Forms or native, MvvmCross, PCL or .NET Standard) one should investigate the (im)possibilities very thoroughly. Once you have made your choices and get on with development it is hard to change.

IP address has been blocked by SSH

Yesterday my Synology reported that an attempt had been made to get into the terminal (SSH) and I was shocked. How did they get on my network in the first place? I started investigating the issue.

image

At first I thought I had really been hacked, but this was not the case. Two features play a role here:

  1. UPnP – which is the Plug and Play for your local network
  2. Automatic blocking

UPnP

UPnP (Universal Plug and Play) is a protocol that allows devices on your network to automatically connect to other devices. A good example of the use of UPnP is DLNA (Digital Living Network Alliance), which allows for streaming video, music, photos, etc. on your network.

Automatic blocking

Automatic blocking is a feature that secures your Synology NAS automatically. Take a look at Configuration –> Security –> Automatic blocking.

KPN router

We have a subscription with KPN in the Netherlands. In our home the KPN router (a KPN Experia Box H368N) is installed. I think I enabled UPnP IGD myself, but it is possible that it was enabled by default.

image

The Synology NAS has UPnP activated by default.

With both devices having UPnP enabled they can talk to each other. The Synology says to the router: “Hey, I would like to open port X to the outside world and map it to my internal port Y”. The router answers: “OK, no problem, done”.

The result is that several ports on the Synology are automatically opened to the internet. From that moment on the router will forward any traffic for port 443 to the Synology NAS. Below is a list of ports automatically opened in the router by several UPnP devices on my network.
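Under the hood that conversation is a SOAP request to the router’s Internet Gateway Device service. Below is a rough C# sketch of such an AddPortMapping call; the control URL, ports and IP address are made up for the example (a real client first discovers the router’s control URL via SSDP).

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class UpnpPortMappingSketch
{
    // Hypothetical control URL; in reality it is discovered via SSDP.
    const string ControlUrl = "http://192.168.1.1:49152/upnp/control/WANIPConn1";

    static async Task AddPortMappingAsync()
    {
        // "Open external port 443 and map it to port 443 on 192.168.1.10."
        string soap =
            "<?xml version=\"1.0\"?>" +
            "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\"" +
            " s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\"><s:Body>" +
            "<u:AddPortMapping xmlns:u=\"urn:schemas-upnp-org:service:WANIPConnection:1\">" +
            "<NewRemoteHost></NewRemoteHost>" +
            "<NewExternalPort>443</NewExternalPort>" +
            "<NewProtocol>TCP</NewProtocol>" +
            "<NewInternalPort>443</NewInternalPort>" +
            "<NewInternalClient>192.168.1.10</NewInternalClient>" +
            "<NewEnabled>1</NewEnabled>" +
            "<NewPortMappingDescription>Synology NAS</NewPortMappingDescription>" +
            "<NewLeaseDuration>0</NewLeaseDuration>" +
            "</u:AddPortMapping></s:Body></s:Envelope>";

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post, ControlUrl);
            request.Content = new StringContent(soap, Encoding.UTF8, "text/xml");
            request.Headers.Add("SOAPAction",
                "\"urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping\"");

            await client.SendAsync(request);
        }
    }
}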

image

One of the ports being exposed is port 443, which is the SSH port on my Synology. Any hacker on the internet can find my IP address, scan for obvious ports like 443 and try to log in with obvious user IDs and passwords.

So in short: nothing in my system was hacked. My network was not invaded by anyone. The Synology’s automatic blocking simply blocked the attacker for several hours after 10 attempts (I’ve lowered that to 3 attempts for now but plan to disable SSH altogether in the near future). And that’s what was reported in the first place (‘IP address <nbr> of <internal device name> has been blocked by SSH’). I can sleep peacefully again.

DataObjects.Net code T4 generation

DataObjects.Net is a very versatile Object Relational Mapper (ORM). What an ORM does is solve the object-relational mismatch we developers always face when we program in OO languages like C#, Java or C++ in combination with a relational database such as Microsoft SQL Server. After careful deliberation we decided to use DataObjects.Net (DO) as our ORM.

We also wanted to use T4 code generation for our ViewModels and for extending the Data Model. DO does not support T4 out of the box. In a previous project we wrote a library that uses reflection to dissect the compiled assembly produced by the compiler in combination with PostSharp (used by DO to inject code).

In DO, which is primarily a ‘code first’ tool, you define your Data Model in code using attribute classes. Here’s an example.

using Xtensive.Orm;

namespace T4DB.Entities
{
    [HierarchyRoot, KeyGenerator(Name = "Root")]
    [TableMapping("Root")]
    public class RootEntity : Entity
    {
        [Key]
        [Field(Nullable = false)]
        public int Id { get; private set; }
    }
}

using Xtensive.Orm;

namespace T4DB.Entities
{
    [TableMapping("Person")]
    public class PersonEntity : RootEntity
    {
        [Field(Nullable = false)]
        public string FirstName { get; set; }

        [Field(Nullable = false)]
        public string LastName { get; set; }

        [Field(Nullable = false)]
        public string Prefix { get; set; }
    }
}

As you can see we use a RootEntity that is inherited by, in this case, the PersonEntity. We use this construct throughout the Data Model. The result is that each entity has its own unique key throughout all the data (no row in any table has the same key). This is a unique feature of DO and gives us the ability to find an entity just by its key; DO takes care of resolving it into the right type for us. So much for this little side-step.
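As a hedged sketch of what that key lookup looks like (the session and transaction plumbing follows the usual DO pattern, but check the DataObjects.Net documentation for the exact query method in your version):

using System;
using Xtensive.Orm;
using T4DB.Entities;

public static class KeyLookupSketch
{
    public static void PrintName(Domain domain, Key key)
    {
        using (var session = domain.OpenSession())
        using (var transactionScope = session.OpenTransaction())
        {
            // The key alone is enough; DO resolves it to the concrete type.
            var person = session.Query.SingleOrDefault(key) as PersonEntity;

            if (person != null)
            {
                Console.WriteLine(person.FirstName + " " + person.LastName);
            }

            transactionScope.Complete();
        }
    }
}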

After compiling the above Data Model you get an assembly. An assembly is written to a DLL stored on disk; in the case above the name will be ‘My.Model.dll’.

What is present in this assembly we can dissect using reflection. Now, we could do all the reflection directly in the T4 templates, but that would clutter them, so we decided to build a second assembly that gathers all the information needed for code generation.

Here’s an example showing the loading of the Data Model Assembly into a Metadata instance in a T4 template.

<#@ output extension=".cs" #>
<#@ Include file="$(SolutionDir)Common.T4\Assemblies.ttinclude" #><#   
    IServiceProvider hostServiceProvider = (IServiceProvider)Host;

    string directoryPath = Host.ResolvePath(@"..\..\My.Model");
    string configName = dte.Solution.SolutionBuild.ActiveConfiguration.Name;
    string assemblyName = directoryPath + @"\bin\Debug\My.Model.dll";

    Assembly assembly = Assembly.LoadFrom(assemblyName);
    Metadata.MetadataContainer.AddAssembly(assembly);
#>

Now of course we need the Metadata class. Here is the part of that class that loads the assembly from disk into the MetadataContainer using the AddAssembly() method.

using System;
using System.Linq;
using System.Collections.Generic;
using Xtensive.Orm;
using System.Reflection;
using System.Windows.Forms;

namespace My.Metadata
{
    /// <summary>
    /// Metadata container class.
    /// </summary>
    public class Metadata
    {
        private static Metadata m_MetadataContainer;

        /// <summary>
        /// Singleton for obtaining a MetadataContainer object
        /// </summary>
        public static Metadata MetadataContainer
        {
            get
            {
                if (m_MetadataContainer == null)
                {
                    m_MetadataContainer = new Metadata();
                }

                return m_MetadataContainer;
            }
        }
        
        /// <summary>
        /// Add all DO.Net Entity types of an Assembly
        /// </summary>
        public void AddAssembly(Assembly assembly)
        {
            try
            {
                foreach (var entity in assembly.GetTypes())
                {
                    AddEntity(entity);
                }
            }
            catch (ReflectionTypeLoadException lex)
            {
                int hr = System.Runtime.InteropServices.Marshal.GetHRForException(lex);
                MessageBox.Show(lex.LoaderExceptions[0].Message);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private List<EntityMetadata> m_Entities;

        /// <summary>
        /// List of Entities in the metadata collection.
        /// </summary>
        public List<EntityMetadata> Entities
        {
            get
            {
                if (m_Entities == null)
                {
                    m_Entities = new List<EntityMetadata>();
                }

                return m_Entities;
            }
        }

        /// <summary>
        /// Add an entity type to the container.
        /// </summary>
        private void AddEntity(Type entity)
        {
            if (!ContainsEntity(entity))
            {
                if (entity.IsSubclassOf(typeof(Entity)))
                {
                    Entities.Add(new EntityMetadata(entity));
                }
            }
        }

        /// <summary>
        /// Is the type already present in the container?
        /// </summary>
        private bool ContainsEntity(Type type)
        {
            foreach (EntityMetadata entityWrapper in Entities)
            {
                if (entityWrapper.Type == type)
                {
                    return true;
                }
            }

            return false;
        }
    }
}

Note: This class may not compile correctly because I combined several snippets from the original code. You can find the complete Metadata class in the attached zip file.

In AddAssembly() you can see that we add all DO entities (AddEntity only adds types derived from Entity) using the EntityMetadata class.
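The EntityMetadata class itself is in the attached zip file. As a rough sketch of its shape (the property names Name, VMName, Type, Properties and FieldTypeString come from the template below; the implementation details are my assumption):

using System;
using System.Collections.Generic;
using System.Reflection;

namespace My.Metadata
{
    /// <summary>
    /// Sketch: wraps one entity type and exposes what the templates need.
    /// </summary>
    public class EntityMetadata
    {
        public EntityMetadata(Type type)
        {
            Type = type;
            Name = type.Name;
            VMName = "VM" + type.Name;

            // Only the properties declared on this entity itself, so the
            // generated PersonEntity does not repeat the Id of RootEntity.
            foreach (PropertyInfo property in type.GetProperties(
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
            {
                Properties.Add(new PropertyMetadata
                {
                    Name = property.Name,
                    FieldTypeString = property.PropertyType.Name
                });
            }
        }

        public Type Type { get; private set; }
        public string Name { get; private set; }
        public string VMName { get; private set; }

        private List<PropertyMetadata> m_Properties;

        /// <summary>
        /// List of properties of this entity.
        /// </summary>
        public List<PropertyMetadata> Properties
        {
            get
            {
                if (m_Properties == null)
                {
                    m_Properties = new List<PropertyMetadata>();
                }

                return m_Properties;
            }
        }
    }

    /// <summary>
    /// Sketch: one property of an entity.
    /// </summary>
    public class PropertyMetadata
    {
        public string Name { get; set; }
        public string FieldTypeString { get; set; }
    }
}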

After loading the assembly into the Metadata object we have all the DataObjects entities in a collection. Each of the above-mentioned EntityMetadata objects collects all the member information from its entity. So after loading the assembly we have an Entities collection and, per entity, a Properties collection through which we can iterate in our template. Below is the MyFirstTemplate.tt taken from the attached zip file.

<#@ output extension=".txt" #>
<#@ Include file="$(SolutionDir)T4Includes\Assemblies.ttinclude" #><#   
    try
    {    // START main try
        // Initialization Output Manager
        Manager outputManager = Manager.Create(Host, GenerationEnvironment);

        // Start construction of the CodeModelTree
        IServiceProvider hostServiceProvider = (IServiceProvider)Host;
        DTE2 dte = (EnvDTE80.DTE2)System.Runtime.InteropServices.Marshal.GetActiveObject("VisualStudio.DTE.11.0");

        string directoryPath = Host.ResolvePath(@"..\My.Model");
        string assemblyName = directoryPath + @"\bin\Debug\My.Model.dll";
        Assembly assembly = Assembly.LoadFrom(assemblyName);
        Metadata.MetadataContainer.AddAssembly(assembly);
#>
// Empty file
<#
        try
        {
            foreach (EntityMetadata entity in Metadata.MetadataContainer.Entities)
            {
                if(entity != null)
                {
                    outputManager.StartNewFile(entity.VMName + ".generated.cs");

#>
using System;

namespace MyNamespace
{
    public partial class <#= entity.Name #>
    {
        public <#= entity.Name #>()
        {
        }
<#
                    foreach (var property in entity.Properties)
                    {
#>
        public <#= property.FieldTypeString #> <#= property.Name #> { get; set; }
<#
                    }
                }
#>
    }
}
<#
            }
        }
        catch (Exception ex)
        {
#>
            // ERROR <#= ex.Message #> 
            // STACK <#= ex.StackTrace #>
<#
        }

        outputManager.Process(true); //write files to disk

    } // END main try
    catch(Exception ex)
    {
        MessageBox.Show("Error in MyFirstTemplate.tt: " + Environment.NewLine + Environment.NewLine + ex.ToString(),"Error in Transformation");
    }
#>
<#+  
#>

After saving this file to disk the template generator runs, writes the generated output to disk and adds the generated files to the project as sub-items of the template.

image

For instance, in VMPersonEntity.generated.cs the following is generated with this template.

using System;

namespace MyNamespace
{
    public partial class PersonEntity
    {
        public PersonEntity()
        {
        }
        public String FirstName { get; set; }
        public String LastName { get; set; }
        public String Prefix { get; set; }
    }
}

Solution: T4.zip

PS: In order to be able to compile this solution you will need to add the DataObjects.Net NuGet package via the NuGet Package Manager.

image

Type ‘Dataobjects’ in the search box in the upper right corner and wait for the result.

Click the ‘Install’ button.

image

Select the following projects for installation.

image

After that, start the compilation. It is possible that the versions of DO and PostSharp differ in your installation. In that case you will have to change the version info in the ‘Assemblies.ttinclude’ file in the T4Include project.

Also remove the RequiresPostSharp.cs files where they are not needed. That file is only needed in the T4Database project.

Poor man’s weather station

With all the money invested in our Loxone home automation project there is currently no room for a weather station, but I need one to control the heating system. The Theben KNX OT Box I bought has a program which can take outdoor temperatures into account.

Currently Loxone has a weather service, but it is limited to certain countries and the Netherlands is not included (yet). So I started thinking: can I not use a weather service on the internet to tell me my local temperature, mold that into a PicoC program and have it tell my heating system the outdoor temperature?

The answer is “Yes you can!”. I’ve cooked up a little PicoC program which does exactly that. Here’s how:

A WARNING IS IN PLACE HERE! PicoC is not very strict in its syntax checking. A program with a missing curly bracket will still run but gives unpredictable behavior. At one point I had to remove the SD card from the server, format it and reinsert it because my server did not respond to anything anymore. Be careful with what you program and double-check your code before committing it to the server!

You will need an account at a weather service. I used ‘aerisapi.com’. Go to their website and create an account. After you have created your account, register your application and you will get a client ID and client secret, which you will need in the code below.

Add a ‘Program’ module to a page in your Loxone Config program.

Loxone Config

Loxone Config

 

I use a lot of memory flags, which help me organize the programming. Here are the settings for the outdoor temperature memory flag. Uncheck ‘Use as digital flag’; you can then also change the ‘Unit’ value to reflect the measured value.

image

Double-click the module, paste the program below into the edit window and change the values of <your id> and <your secret> to the values obtained from the website mentioned above.

 

// This program calls the aerisapi.com weather web service.
// The return value is a JSON string. Since PicoC does not support
// JSON I use scraping of the string to find the values.

enum OutputPorts
{
    Temperature,        // AQ1
    Humidity,            // AQ2
    windKPH             // AQ3
};

int GetIntValue(char *result, char *name, int def)
{
    int value = def;

    int pos = strfind(result, name, 0);

    if(pos > 0)
    {
        char *stemp = calloc(1, 10);
        int lenName = strlen(name);

        strncpy(stemp, result + pos + lenName, 5);

        value = atoi(stemp);

        free(stemp);
        stemp = 0;
    }

    printf ("%s = %d", name, value);

    return value;
}

/// <Summary>
/// Main loop.
/// </Summary>
while(TRUE)
{
    char *host = "api.aerisapi.com";
    char *page = "/observations/apeldoorn,nl?client_id=<your id>&client_secret=<your secret>";

    char *result = httpget(host, page);

    if(result != 0)
    {
        int temp = GetIntValue(result, "\"tempC\":", -100);

        if(temp != -100)
        {
            setoutput(Temperature, temp);
        }

        int humidity = GetIntValue(result, "\"humidity\":", -100);

        if(humidity != -100)
        {
            setoutput(Humidity, humidity);
        }

        int wind = GetIntValue(result, "\"windKPH\":", -100);

        if(wind != -100)
        {
            setoutput(windKPH, wind);
        }

        free(result);
    }

    // Slow the loop down 10 minutes
    int sleepTime = 10 * 60 * 1000;
    sleep(sleepTime);
}

When you run the code on the server you will see the following in the log, and the memory flags will show the temperature in Celsius, the humidity and the wind velocity.

image

MCRemote tips & tricks

I often get questions about connection problems when using MCRemote. The rule of thumb is that MCRemote cannot read what it cannot receive.

Most of the problems are related to the local network. Check the following:

Switch off authentication in MC

A known problem occurs when authentication for the library in MC is switched on. If you do not really need it, switch it off for now (I’m working on a solution for the problem).

Authentication

Did you enable the webservice in MC?

Before MCRemote can connect to MC the web service must be started. In MC do the following:

Go to the ‘Media Network’ plugin and click the options button. Check that ‘Use Media Network to share this library and enable DLNA’ is enabled. After confirming the settings you should see some activity in the ‘Activity log’. As soon as MCRemote is started there should be a lot of activity here. To filter out all other connections you can select the server in the ‘Server summary’ drop-down box.

image

Can I connect to the server MC is running on?

Try entering the following URL in the browser on your phone:

http://<IpAddressOfTheServer>:52199/MCWS/V1/

image

What you see here is a description of the web service MCRemote relies on. Tapping the different links on the page should give you control over MC (Pause, Play, Next, Previous, etc.).
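As an illustration, calling one of those links from your own code is just a plain HTTP GET. The host address below is made up, and the exact function names can differ per MC version, so go by the list that the /MCWS/V1/ page itself shows.

using System;
using System.Net.Http;

class McwsSketch
{
    static void Main()
    {
        // Placeholder address; use the IP address of your own MC server.
        var baseUrl = "http://192.168.1.20:52199/MCWS/v1/";

        using (var client = new HttpClient())
        {
            // Toggle play/pause, exactly like tapping that link in the browser.
            var response = client.GetAsync(baseUrl + "Playback/PlayPause").GetAwaiter().GetResult();
            Console.WriteLine(response.StatusCode);
        }
    }
}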

If you cannot connect to the service and you don’t see the page above, then something is blocking the connection to the server. There are many possible reasons why the connection fails, and possibly more than one at the same time, so if one suggestion does not help, move on to the next while leaving the previous one active too:

  • The firewall on the server is blocking port 52199. Try disabling the firewall temporarily to eliminate this cause during testing. Afterwards you can add a rule to the firewall allowing the connection.
  • A virus scanner or adware blocker on the server is blocking the connection. Try disabling the antivirus and adware software temporarily to eliminate this cause. Look in the software’s documentation for how to allow connections on the port (default 52199).
  • Many routers have restrictions on wireless connections as a means of protecting your local network. These restrictions could block the remote. Possible restrictions are:
    • A network access list which only allows certain MAC addresses to connect. In that case disable the list temporarily to eliminate the cause during testing. If this is the cause, add the MAC address of the phone to the list.
    • Blocking of ports other than the standard ones. For instance, most routers allow port 80, which is the standard for HTTP traffic, and refuse all non-standard ports. You can use port forwarding or enable the default port 52199 of the MC service. If you choose port forwarding you should give your phone a fixed IP address; when you rely on DHCP it is possible the router assigns a different IP address each time the phone connects to the wireless network.
  • Network congestion could be a cause. When there is a lot going on on your local network (many DLNA devices, BitTorrent downloads, Skype, VoIP, etc.) the network can become congested, meaning there is not enough bandwidth left for other tasks.
    • Try to disable as much network activity as possible during testing. Stop BitTorrent software completely. Disable torrents on NAS equipment during testing. Torrents also take a lot of upload bandwidth even when no download is in progress.
    • Many routers have a so-called QoS (Quality of Service) settings page where you can give certain activities more or less bandwidth to prevent other tasks from getting too little. MC’s traffic will probably fall under the ‘Other connections’ category since port 52199 is not a standard port.

How do I find the IP Address of the server?

Start a command prompt.

Press (Windows key)+‘R’. The ‘Run’ dialog should appear. Type ‘cmd’ and click the OK button.

image

Type ‘ipconfig /all’ (without the single quotes) and press the ‘Enter’ key on your keyboard.

image

The IP Address will be listed for your network connection.

Alternatively, in Windows 7 you can also find the IP address via the Control Panel as shown below (your network connection will probably have a name other than ‘Local Area Connection 2’).

image